57 Sources
[1]
Ted Cruz plan to punish states that regulate AI shot down in 99-1 vote
Facing overwhelming opposition from both Democrats and Republicans, Sen. Ted Cruz (R-Texas) accepted defeat and joined a 99-1 vote against his own plan to punish states that regulate artificial intelligence. "The Senate came together tonight to say that we can't just run over good state consumer protection laws," Sen. Maria Cantwell (D-Wash.) said. The Cruz plan would have thwarted state laws related to robocalls, deepfakes, and autonomous vehicles, she said. The House previously approved a budget bill with a provision to ban state AI regulation for 10 years. The Senate has a rule against including "extraneous matter" in budget reconciliation legislation, which Cruz tried to get around by proposing a 10-year moratorium in which states would be shut out of a $42 billion broadband deployment fund if they try to regulate AI. The Senate passed the overall budget bill today in a 51-50 vote. Cruz's office said in early June that his proposal aimed to prevent states "from strangling AI deployment with EU-style regulation." Less than three weeks later, his home state of Texas enacted a law regulating the use of artificial intelligence. Cruz changed his plan, saying that states regulating AI would only be shut out of a $500 million AI fund instead of the $42 billion broadband fund. But Cantwell's office said the new version contained a backdoor that could still threaten states' access to the entire broadband fund. Sen. Marsha Blackburn (R-Tenn.) teamed up with Cantwell to fight Cruz's plan. Blackburn briefly reached a compromise with Cruz on a five-year moratorium that would allow some forms of AI regulation but then decided the compromise wasn't good enough. "While I appreciate Chairman Cruz's efforts to find acceptable language that allows states to protect their citizens from the abuses of AI, the current language is not acceptable to those who need these protections the most," Blackburn said in a statement quoted by Politico last night. "This provision could allow Big Tech to continue to exploit kids, creators, and conservatives. Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens."
[2]
Congress might block state AI laws for five years. Here's what it means. | TechCrunch
A federal proposal that would ban states and local governments from regulating AI for five years could soon be signed into law, as Sen. Ted Cruz (R-TX) and other lawmakers work to secure its inclusion into a GOP megabill -- which the Senate is voting on Monday -- ahead of a key July 4 deadline. Those in favor -- including OpenAI's Sam Altman, Anduril's Palmer Luckey, and a16z's Marc Andreessen -- argue that a "patchwork" of AI regulation among states would stifle American innovation at a time when the race to beat China is heating up. Critics include most Democrats, many Republicans, Anthropic's CEO Dario Amodei, labor groups, AI safety nonprofits, and consumer rights advocates. They warn that this provision would block states from passing laws that protect consumers from AI harms and would effectively allow powerful AI firms to operate without much oversight or accountability. On Friday, a group of 17 Republican governors wrote to Senate Majority Leader John Thune, who has advocated for a "light touch" approach to AI regulation, and House Speaker Mike Johnson calling for the so-called "AI moratorium" to be stripped from the budget reconciliation bill, per Axios. The provision was squeezed into the bill, nicknamed the "Big Beautiful Bill," in May. It was initially designed to prohibit states from "[enforcing] any law or regulation regulating [AI] models, [AI] systems, or automated decision systems" for a decade. However, over the weekend, Cruz and Sen. Marsha Blackburn (R-TN), who has also criticized the bill, agreed to shorten the pause on state-based AI regulation to five years. The new language also attempts to exempt laws addressing child sexual abuse materials, children's online safety, and an individual's rights to their name, likeness, voice, and image. However, the amendment says the laws must not place an "undue or disproportionate burden" on AI systems -- legal experts are unsure how this would impact state AI laws. Such a measure could preempt state AI laws that have already passed, such as California's AB 2013, which requires companies to reveal the data used to train AI systems, and Tennessee's ELVIS Act, which protects musicians and creators from AI-generated impersonations. But the moratorium's reach extends far beyond these examples. Public Citizen has compiled a database of AI-related laws that could be affected by the moratorium. The database reveals that many states have passed laws that overlap, which could actually make it easier for AI companies to navigate the "patchwork." For example, Alabama, Arizona, California, Delaware, Hawaii, Indiana, Montana, and Texas have criminalized or created civil liability for distributing deceptive AI-generated media meant to influence elections. The AI moratorium also threatens several noteworthy AI safety bills awaiting signature, including New York's RAISE Act, which would require large AI labs nationwide to publish thorough safety reports. Getting the moratorium into a budget bill has required some creative maneuvering. Because provisions in a budget bill must have a direct fiscal impact, Cruz revised the proposal in June to make compliance with the AI moratorium a condition for states to receive funds from the $42 billion Broadband Equity Access and Deployment (BEAD) program. Cruz released another revision last week, which he says ties the requirement only to the new $500 million in BEAD funding included in the bill -- a separate, additional pot of money. 
However, close examination of the revised text finds the language also threatens to pull already obligated broadband funding from states that don't comply. Sen. Maria Cantwell (D-WA) previously criticized Cruz's reconciliation language, claiming the provision "forces states receiving BEAD funding to choose between expanding broadband or protecting consumers from AI harms for ten years." As of Monday, the Senate is engaged in a vote-a-rama -- a series of rapid votes on the budget bill's full slate of amendments. The new language that Cruz and Blackburn agreed on will be included in a broader amendment, one that Republicans are expected to pass on a party line vote. Senators will also likely vote on a Democrat-backed amendment to strip the entire section, sources familiar with the matter told TechCrunch. Chris Lehane, chief global affairs officer at OpenAI, said in a LinkedIn post that the "current patchwork approach to regulating AI isn't working and will continue to worsen if we stay on this path." He said this would have "serious implications" for the U.S. as it races to establish AI dominance over China. "While not someone I'd typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward," Lehane wrote. OpenAI CEO Sam Altman shared similar sentiments last week during a live recording of the tech podcast Hard Fork. He said while he believes some adaptive regulation that addresses the biggest existential risks of AI would be good, "a patchwork across the states would probably be a real mess and very difficult to offer services under." Altman also questioned whether policymakers were equipped to handle regulating AI when the technology moves so quickly. "I worry that if ... we kick off a three-year process to write something that's very detailed and covers a lot of cases, the technology will just move very quickly," he said. But a closer look at existing state laws tells a different story. Most state AI laws that exist today aren't far-reaching; they focus on protecting consumers and individuals from specific harms, like deepfakes, fraud, discrimination, and privacy violations. They target the use of AI in contexts like hiring, housing, credit, healthcare, and elections, and include disclosure requirements and algorithmic bias safeguards. TechCrunch has asked Lehane and other members of OpenAI's team if they could name any current state laws that have hindered the tech giant's ability to progress its technology and release new models. We also asked why navigating different state laws would be considered too complex, given OpenAI's progress on technologies that may automate a wide range of white-collar jobs in the coming years. TechCrunch asked similar questions of Meta, Google, Amazon, and Apple, but has not received any answers. "The patchwork argument is something that we have heard since the beginning of consumer advocacy time," Emily Peterson-Cassin, corporate power director at internet activist group Demand Progress, told TechCrunch. "But the fact is that companies comply with different state regulations all the time. The most powerful companies in the world? Yes. Yes, you can." Opponents and cynics alike say the AI moratorium isn't about innovation -- it's about sidestepping oversight. While many states have passed regulation around AI, Congress, which moves notoriously slowly, has passed zero laws regulating AI. 
"If the federal government wants to pass strong AI safety legislation, and then preempt the states' ability to do that, I'd be the first to be very excited about that," said Nathan Calvin, VP of state affairs at the nonprofit Encode -- which has sponsored several state AI safety bills -- in an interview. "Instead, [the AI moratorium] takes away all leverage, and any ability, to force AI companies to come to the negotiating table." One of the loudest critics of the proposal is Anthropic CEO Dario Amodei. In an opinion piece for The New York Times, Amodei said "a 10-year moratorium is far too blunt an instrument." "AI is advancing too head-spinningly fast," he wrote. "I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act, and no national policy as a backstop." He argued that instead of prescribing how companies should release their products, the government should work with AI companies to create a transparency standard for how companies share information about their practices and model capabilities. The opposition isn't limited to Democrats. There's been notable opposition to the AI moratorium from Republicans who argue the provision stomps on the GOP's traditional support for states' rights, even though it was crafted by prominent Republicans like Cruz and Rep. Jay Obernolte. These Republican critics include Sen. Josh Hawley (R-MO), who is concerned about states' rights and is working with Democrats to strip it from the bill. Blackburn also criticized the provision, arguing that states need to protect their citizens and creative industries from AI harms. Rep. Marjorie Taylor Greene (R-GA) even went so far as to say she would oppose the entire budget if the moratorium remains. Republicans like Cruz and Senate Majority Leader John Thune say they want a "light touch" approach to AI governance. Cruz also said in a statement that "every American deserves a voice in shaping" the future. However, a recent Pew Research survey found that most Americans seem to want more regulation around AI. The survey found that about 60% of U.S. adults and 56% of AI experts say they're more concerned that the U.S. government won't go far enough in regulating AI than they are that the government will go too far. Americans also largely aren't confident that the government will regulate AI effectively, and they are skeptical of industry efforts around responsible AI.
[3]
Congress might block state AI laws for a decade. Here's what it means.
A federal proposal that would ban states and local governments from regulating AI for 10 years could soon be signed into law, as Sen. Ted Cruz (R-TX) and other lawmakers work to secure its inclusion into a GOP megabill ahead of a key July 4 deadline. Those in favor - including OpenAI's Sam Altman, Anduril's Palmer Luckey, and a16z's Marc Andreessen - argue that a "patchwork" of AI regulation among states would stifle American innovation at a time when the race to beat China is heating up. Critics include most Democrats, several Republicans, Anthropic's CEO Dario Amodei, labor groups, AI safety nonprofits, and consumer rights advocates. They warn that this provision would block states from passing laws that protect consumers from AI harms and would effectively allow powerful AI firms to operate without much oversight or accountability. The so-called "AI moratorium" was squeezed into the budget reconciliation bill, nicknamed the "Big Beautiful Bill," in May. It is designed to prohibit states from "[enforcing] any law or regulation regulating [AI] models, [AI] systems, or automated decision systems" for a decade. Such a measure could preempt state AI laws that have already passed, such as California's AB 2013, which requires companies to reveal the data used to train AI systems, and Tennessee's ELVIS Act, which protects musicians and creators from AI-generated impersonations. The moratorium's reach extends far beyond these examples. Public Citizen has compiled a database of AI-related laws that could be affected by the moratorium. The database reveals that many states have passed laws that overlap, which could actually make it easier for AI companies to navigate the "patchwork." For example, Alabama, Arizona, California, Delaware, Hawaii, Indiana, Montana and Texas have criminalized or created civil liability for distributing deceptive AI-generated media meant to influence elections. The AI moratorium also threatens several noteworthy AI safety bills awaiting signature, including New York's RAISE Act, which would require large AI labs nationwide to publish thorough safety reports. Getting the moratorium into a budget bill has required some creative maneuvering. Because provisions in a budget bill must have a direct fiscal impact, Cruz revised the proposal in June to make compliance with the AI moratorium a condition for states to receive funds from the $42 billion Broadband Equity Access and Deployment (BEAD) program. Cruz then released another revision on Wednesday, which he says ties the requirement only to the new $500 million in BEAD funding included in the bill - a separate, additional pot of money. However, close examination of the revised text finds the language also threatens to pull already-obligated broadband funding from states that don't comply. Sen. Maria Cantwell (D-WA) criticized Cruz's reconciliation language on Thursday, claiming the provision "forces states receiving BEAD funding to choose between expanding broadband or protecting consumers from AI harms for ten years." What's next? Currently, the provision is at a standstill. Cruz's initial revision passed the procedural review earlier this week, which meant that the AI moratorium would be included in the final bill. However, reporting today from Punchbowl News and Bloomberg suggest that talks have reopened, and conversations on the AI moratorium's language are ongoing. 
Sources familiar with the matter tell TechCrunch they expect the Senate to begin heavy debate this week on amendments to the budget, including one that would strike the AI moratorium. That will be followed by a vote-a-rama - a series of rapid votes on the full slate of amendments. Chris Lehane, chief global affairs officer at OpenAI, said in a LinkedIn post that the "current patchwork approach to regulating AI isn't working and will continue to worsen if we stay on this path." He said this would have "serious implications" for the U.S. as it races to establish AI dominance over China. "While not someone I'd typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward," Lehane wrote. OpenAI CEO Sam Altman shared similar sentiments this week during a live recording of the tech podcast Hard Fork. He said while he believes some adaptive regulation that addresses the biggest existential risks of AI would be good, "a patchwork across the states would probably be a real mess and very difficult to offer services under." Altman also questioned whether policymakers were equipped to handle regulating AI when the technology moves so quickly. "I worry that if...we kick off a three-year process to write something that's very detailed and covers a lot of cases, the technology will just move very quickly," he said. But a closer look at existing state laws tells a different story. Most state AI laws that exist today aren't far-reaching; they focus on protecting consumers and individuals from specific harms, like deepfakes, fraud, discrimination, and privacy violations. They target the use of AI in contexts like hiring, housing, credit, healthcare, and elections, and include disclosure requirements and algorithmic bias safeguards. TechCrunch has asked Lehane and other members of OpenAI's team if they could name any current state laws that have hindered the tech giant's ability to progress its technology and release new models. We also asked why navigating different state laws would be considered too complex, given OpenAI's progress on technologies that may automate a wide range of white-collar jobs in the coming years. TechCrunch asked similar questions of Meta, Google, Amazon, and Apple, but has not received any answers. The case against preemption "The patchwork argument is something that we have heard since the beginning of consumer advocacy time," Emily Peterson-Cassin, corporate power director at internet activist group Demand Progress, told TechCrunch. "But the fact is that companies comply with different state regulations all the time. The most powerful companies in the world? Yes. Yes, you can." Opponents and cynics alike say the AI moratorium isn't about innovation - it's about sidestepping oversight. While many states have passed regulation around AI, Congress, which moves notoriously slowly, has passed zero laws regulating AI. "If the federal government wants to pass strong AI safety legislation, and then preempt the states' ability to do that, I'd be the first to be very excited about that," said Nathan Calvin, VP of state affairs at the nonprofit Encode - which has sponsored several state AI safety bills - in an interview. "Instead, [the AI moratorium] takes away all leverage, and any ability, to force AI companies to come to the negotiating table." One of the loudest critics of the proposal is Anthropic CEO Dario Amodei. In an opinion piece for The New York Times, Amodei said "a 10-year moratorium is far too blunt an instrument." 
"AI is advancing too head-spinningly fast," he wrote. "I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act, and no national policy as a backstop." He argued that instead of prescribing how companies should release their products, the government should work with AI companies to create a transparency standard for how companies share information about their practices and model capabilities. The opposition isn't limited to Democrats. There's been notable opposition to the AI moratorium from Republicans who argue the provision stomps on the GOP's traditional support for states' rights, even though it was crafted by prominent Republicans like Cruz and Rep. Jay Obernolte. These Republican critics include Senator Josh Hawley (R-MO) who is concerned about states' rights and is working with Democrats to strip it from the bill. Senator Marsha Blackburn (R-TN) also criticized the provision, arguing that states need to protect their citizens and creative industries from AI harms. Rep. Marjorie Taylor Greene (R-GA) even went so far as to say she would oppose the entire budget if the moratorium remains. What do Americans want? Republicans like Cruz and Senate Majority Leader John Thune say they want a "light touch" approach to AI governance. Cruz also said in a statement that "every American deserves a voice in shaping" the future. However, a recent Pew Research survey found that most Americans seem to want more regulation around AI. The survey found that about 60% of U.S. adults and 56% of AI experts say they're more concerned that the U.S. government won't go far enough in regulating AI than they are that the government will go too far. Americans also largely aren't confident that the government will regulate AI effectively, and they are skeptical of industry efforts around responsible AI.
[4]
US senate removes controversial 'AI moratorium' from budget bill | TechCrunch
U.S. senators voted overwhelmingly on Tuesday to remove a controversial 10-year ban on states' abilities to regulate AI from the Trump administration's "Big Beautiful Bill," reports Axios. The provision was added to the reconciliation bill by Sen. Ted Cruz (R-TX). Many prominent Silicon Valley executives -- including OpenAI's Sam Altman, Anduril's Palmer Luckey, and a16z's Marc Andreessen -- were in favor of the so-called "AI moratorium," which they said would prevent states from forming an unworkable patchwork of regulation that could stifle AI innovation. Opposition to the provision became a bipartisan issue, as most Democrats and many Republicans warned that the ban on state regulation would harm consumers, and let powerful AI companies operate with little oversight. Critics also objected to Cruz's plan to tie compliance to federal broadband funding. After going back and forth over the provision, Sen. Marsha Blackburn (R-TN) on Monday offered an amendment to strip the provision alongside Sen. Maria Cantwell (D-WA). Blackburn originally opposed the provision, but she came to an agreement with Cruz over the weekend that shortened the proposed ban from ten years to five. She then pulled her support for the provision entirely on Monday.
[5]
Senator Blackburn Pulls Support for AI Moratorium in Trump's 'Big Beautiful Bill' Amid Backlash
As Congress races to pass President Donald Trump's "Big Beautiful Bill," it's also sprinting to placate the many haters of the bill's "AI moratorium" provision which originally required a 10-year pause on state AI regulations. The provision, which was championed by White House AI czar and venture capitalist David Sacks, has proved remarkably unpopular with a diverse contingent of lawmakers ranging from 40 state attorneys general to the ultra-MAGA Representative Marjorie Taylor Greene. Sunday night, Senator Marsha Blackburn and Senator Ted Cruz announced a new version of the AI moratorium, knocking the pause from a full decade down to five years and adding a variety of carve-outs. But after critics attacked the watered-down version of the bill as a "get-out-of-jail free card" for Big Tech, Blackburn reversed course Monday evening. "While I appreciate Chairman Cruz's efforts to find acceptable language that allows states to protect their citizens from the abuses of AI, the current language is not acceptable to those who need these protections the most," Blackburn said in a statement to WIRED. "This provision could allow Big Tech to continue to exploit kids, creators, and conservatives. Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens." For those keeping track at home, Blackburn initially opposed the moratorium, then worked with Cruz on the five-year version of the provision, then changed her mind again to oppose her own compromised version of the law. She has historically championed regulations that protect the music industry, which is a major economic player in her home state of Tennessee. Last year, Tennessee passed a law to stop AI deepfakes of music artists. Her proposed AI provision included an exemption for this kind of law, which expands the legal right to protect one's likeness from commercial exploitation. The version of the moratorium she and Cruz proposed on Sunday also had carve-outs for state laws dealing with "unfair or deceptive acts or practices, child online safety, child sexual abuse material, rights of publicity, protection of a person's name, image, voice, or likeness." Despite these carve-outs, the new AI provision received fierce opposition from a wide array of organizations and individuals, ranging from the International Longshore & Warehouse Union ("dangerous federal overreach") to Steve Bannon ("they'll get all their dirty work done in the first five years.") The moratorium's carve-out language comes with a caveat that the exempted state laws cannot place "undue or disproportionate burden" on AI systems or "automated decision systems." With AI and algorithmic feeds embedded in social platforms, critics like Senator Maria Cantwell see the provision's language as creating "a brand-new shield against litigation and state regulation." Many advocacy groups and legal experts who focus on these issues, including kid safety rules, say that the new AI provision remains incredibly damaging. Danny Weiss, the chief advocacy officer at the nonprofit Common Sense Media, says that this version is still "extremely sweeping" and "could affect almost every effort to regulate tech with regards to safety" because of the undue burden shield. 
JB Branch, an advocate for consumer rights nonprofit Public Citizen, called the updated moratorium "a clever Trojan horse designed to wipe out state protections while pretending to preserve them" in a statement, and argued that the undue burden language rendered the carve-outs "meaningless." On Monday, Cantwell and Senator Ed Markey introduced an amendment to remove the AI moratorium from the bill altogether, condemning the version proposed Sunday evening as "a wolf in sheep's clothing," according to a statement from Markey. "The language still allows the Trump administration to use federal broadband funding as a weapon against the states and still prevents states from protecting children online from Big Tech's predatory behavior," he said. (The moratorium ties access to funding from the Broadband Equity, Access, and Deployment program to compliance with the five-year pause.) The Trump Administration has urged Congress to vote on the Big Beautiful Bill before the break for the Fourth of July holiday. It's unclear when this amendment will be voted on, but it may be soon -- and it may have a supporter in Blackburn.
[6]
Congress Dropped a Plan to Block State AI Rules. Why That Matters for Consumers
After months of debate, a plan in Congress to block states from regulating artificial intelligence was pulled from the big federal budget bill this week. The proposed 10-year moratorium would have prevented states from enforcing rules and laws on AI if the state accepted federal funding for broadband access. The issue exposed divides among technology experts and politicians, with some Senate Republicans joining Democrats in opposing the move. The Senate eventually voted 99-1 to remove the proposal from the bill, which also includes the extension of the 2017 federal tax cuts and cuts to services like Medicaid and SNAP. Congressional Republican leaders have said they want to have the measure on President Donald Trump's desk by July 4. Tech companies and many Congressional Republicans supported the moratorium, saying it would prevent a "patchwork" of rules and regulations across states and local governments that could hinder the development of AI -- especially in the context of competition with China. Critics, including consumer advocates, said states should have a free hand to protect people from potential issues with the fast-growing technology. "The Senate came together tonight to say that we can't just run over good state consumer protection laws," Sen. Maria Cantwell, a Washington Democrat, said in a statement. "States can fight robocalls, deepfakes and provide safe autonomous vehicle laws. This also allows us to work together nationally to provide a new federal framework on artificial intelligence that accelerates US leadership in AI while still protecting consumers." Despite the moratorium being pulled from this bill, the debate over how the government can appropriately balance consumer protection and supporting technology innovation will likely continue. "There have been a lot of discussions at the state level, and I would think that it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We could approach it at the national level. We can approach it at the state level, too. I think we need both." The proposed moratorium would have barred states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to pop up. The biggest focus is not in the US, but in Europe, where the European Union has already implemented standards for AI. But states are starting to get in on the action. Colorado passed a set of consumer protections last year, set to go into effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination if AI systems are used in hiring. "States are all over the map when it comes to what they want to regulate in AI," said Arsen Kourinian, a partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. In the House committee hearing last month, Rep.
Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead," he said. While some states have laws on the books, not all of them have gone into effect or seen any enforcement. That limits the potential short-term impact of a moratorium, said Cobun Zweifel-Keegan, managing director in Washington for IAPP. "There isn't really any enforcement yet." A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. "The federal government would become the primary and potentially sole regulator around AI systems," he said. AI developers have asked for any guardrails placed on their work to be consistent and streamlined. "We need, as an industry and as a country, one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April hearing. "But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards." During a Senate Commerce Committee hearing in May, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system "would be disastrous" for the industry. Altman suggested instead that the industry develop its own standards. Asked by Sen. Brian Schatz, a Democrat from Hawaii, if industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good, but, "It's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences." (Disclosure: Ziff Davis, parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Not all AI companies are backing a moratorium, however. In a New York Times op-ed, Anthropic CEO Dario Amodei called it "far too blunt an instrument," saying the federal government should create transparency standards for AI companies instead. "Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can decide whether further government action is needed." Concerns from companies, both the developers that create AI systems and the "deployers" who use them in interactions with consumers, often stem from fears that states will mandate significant work such as impact assessments or transparency notices before a product is released, Kourinian said. Consumer advocates have said more regulations are needed and hampering the ability of states could hurt the privacy and safety of users. A moratorium on specific state rules and laws could result in more consumer protection issues being dealt with in court or by state attorneys general, Kourinian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. "Time will tell how judges will interpret those issues," he said. Susarla said the pervasiveness of AI across industries means states might be able to regulate issues such as privacy and transparency more broadly, without focusing on the technology. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits.
"It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand, we also need to recognize that there can be real consequences," she said. Much policy around the governance of AI systems does happen because of those so-called technology-agnostic rules and laws, Zweifel-Keegan said. "It's worth also remembering that there are a lot of existing laws and there is a potential to make new laws that don't trigger the moratorium but do apply to AI systems as long as they apply to other systems," he said.
[7]
Trump's big, revised bill will slash AI funding for states that regulate AI
Senators add exemptions for state laws targeting unfair or deceptive practices and child sexual abuse material. The Trump administration's tax bill -- also called its "big, beautiful bill," which is facing a vote today -- includes a rule that would prevent states from enforcing their own AI legislation for five years, and would withhold up to $500 million in funding for AI infrastructure if states don't comply. Over the weekend, senators also added exemptions for state laws targeting unfair or deceptive practices and child sexual abuse material (CSAM). The initial version of the rule -- which banned states from enforcing AI regulation for 10 years and made broadband internet funding dependent on states' compliance -- did not account for those cases. If passed, the rule would prohibit states from enforcing AI legislation for five years and simultaneously put AI funding for states in limbo. It wouldn't only impact in-progress legislation; laws that states have already passed would stay intact in writing but would effectively be rendered useless, lest states want to put their AI funding on the line. In practice, this would effectively create a patchwork imbalance across the country: Some states would have thorough legislation but no funding to advance AI safely, while others have no regulation but plenty of funding to keep up in the race. "State and local governments should have the right to protect their residents against harmful technology and hold the companies responsible to account," said Jonathan Walter, a senior policy adviser at The Leadership Conference's Center for Civil Rights and Technology. The administration is due to release its AI policy on July 22. In the meantime, the country is effectively flying blind, which has prompted several states to introduce their own AI bills. Under the Biden administration, which took some steps to regulate AI, states were already introducing AI legislation as the technology evolved rapidly into the unknown. Walter added that the vagueness of the ban's language could block states' oversight of non-AI-powered automation as well, including "insurance algorithms, autonomous vehicle systems, and models that determine how much residents pay for their utilities." "The main issue here is that there are already real, concrete harms from AI, and this legislation is going to take the brakes away from states without replacing it with anything at all," said Chas Ballew, CEO of AI agent provider Conveyor and a former Pentagon regulatory attorney. By preventing states from enforcing individual AI policy when federal regulation is still a big question mark, the Trump administration opens the door for AI companies to accelerate without any checks or balances -- what Ballew called a "dangerous regulatory vacuum" that would give companies "a decade-long free pass to deploy potentially harmful AI systems without oversight." Given how rapidly generative AI has evolved just since ChatGPT's launch in 2022, a decade is eons in technological terms. President Trump's second term thus far doesn't suggest AI safety is a priority for federal regulation. Since January, the Trump administration has overridden safety initiatives and testing partnerships put in place by the Biden administration, shrunk and renamed the US AI Safety Institute the "pro-innovation, pro-science" US Center for AI Standards and Innovation, and cut funding for AI research.
"Even if President Trump met his own deadline for a comprehensive AI policy, it's unlikely that it will seriously address harms from faulty and discriminatory AI systems," Walter said. AI systems used for HR tech, hiring, and financial applications like determining mortgage rates have been shown to act with bias toward marginalized groups and can display racism. Understandably, AI companies have expressed a preference for federal regulation over individual state laws, which would make maintaining compliant models and products easier than trying to abide by patchwork legislation. But in some cases, states may need to set their own regulations for AI, even with a federal foundation in place. "The differences between states with respect to AI regulation reflect the different approaches states have to the underlying issues, like employment law, consumer protection laws, privacy laws, and civil rights," Ballew points out. "AI regulation needs to be incorporated into these existing legal schemes." He added that it's wise for states to have "a diversity of regulatory schemes," as it "promotes accountability, because state and local officials are closest to the people affected by these laws." The bill passed the House of Representatives with the moratorium included, to the displeasure of some Republican representatives who would prefer their states have a say in how they protect their rights, jobs, and privacy in the face of rapidly expanding AI. It's now awaiting a vote in the Senate; as of Thursday, the Senate parliamentarian asked Republicans to rewrite the moratorium to clarify it won't impact the existing $42.45 billion in broadband funding. Previous proposals withheld internet funds Broadband Equity, Access, and Deployment (BEAD) is a $42-billion program run by the National Telecommunications and Information Administration (NTIA) that helps states build infrastructure to expand high-speed internet access. Before it was revised, the Senate rule would have made all of that money, plus $500 million in new funding, contingent on states backing off their own AI laws.
[8]
Senate pits AI regulation against state funding
A now-revised proposal in Trump's bill would ban states from regulating AI for 5 years. Here's what it means. The Trump administration's tax bill -- also called its "big, beautiful bill," which is facing a vote today -- also includes a rule that would prevent states from enforcing their own AI legislation for five years, and would withhold $500 million in funding for AI infrastructure if states don't comply. Over the weekend, senators also added exemptions for state laws targeting unfair or deceptive practices and child sexual abuse material (CSAM). The initial version of the rule -- which banned states from enforcing AI regulation for 10 years and made broadband internet funding dependent on states' compliance -- did not account for those cases. If passed, the rule would prohibit states from enforcing AI legislation for five years and simultaneously put AI funding for states in limbo. And it wouldn't only impact in-progress legislation; laws that states have already passed would stay intact in writing, but would effectively be rendered useless, lest states want to put their AI funding on the line. In practice, this would effectively create a patchwork imbalance across the country: some states would have thorough legislation but no funding to advance AI safely, while others have no regulation but plenty of funding to keep up in the race. "State and local governments should have the right to protect their residents against harmful technology and hold the companies responsible to account," said Jonathan Walter, a senior policy adviser at The Leadership Conference's Center for Civil Rights and Technology. The administration is due to release its AI policy on July 22. In the meantime, the country is effectively flying blind, which has prompted several states to introduce their own AI bills. Even under the Biden administration, which took some steps to regulate AI, states were already introducing AI legislation as the technology evolved rapidly into the unknown. Walter added that the vagueness of the ban's language could block states' oversight of non-AI-powered automation as well, including "insurance algorithms, autonomous vehicle systems, and models that determine how much residents pay for their utilities." "The main issue here is that there are already real, concrete harms from AI, and this legislation is going to take the brakes away from states without replacing it with anything at all," said Chas Ballew, CEO of AI agent provider Conveyor and a former Pentagon regulatory attorney. By preventing states from enforcing individual AI policy when federal regulation is still a big question mark, the Trump administration opens the door for AI companies to accelerate without any checks or balances -- what Ballew called a "dangerous regulatory vacuum" that would give companies "a decade-long free pass to deploy potentially harmful AI systems without oversight." Given how rapidly generative AI has evolved just since ChatGPT's launch in 2022, a decade is eons in technological terms. President Donald Trump's second term thus far doesn't suggest AI safety is a priority for federal regulation. Since January, the Trump administration has overridden safety initiatives and testing partnerships put in place by the Biden administration, shrunk and renamed the US AI Safety Institute the "pro-innovation, pro-science" US Center for AI Standards and Innovation, and cut funding for AI research.
"Even if President Trump met his own deadline for a comprehensive AI policy, it's unlikely that it will seriously address harms from faulty and discriminatory AI systems," Walter said. AI systems used for HR tech, hiring, and financial applications like determining mortgage rates have been shown to act with bias towards marginalized groups and can display racism. Also: AI leaders must take a tight grip on regulatory, geopolitical, and interpersonal concerns Understandably, AI companies have expressed a preference for federal regulation over individual state laws, which would make maintaining compliant models and products easier than trying to abide by patchwork legislation. But in some cases, states may need to set their own regulations for AI, even with a federal foundation in place. "The differences between states with respect to AI regulation reflect the different approaches states have to the underlying issues, like employment law, consumer protection laws, privacy laws, and civil rights," Ballew points out. "AI regulation needs to be incorporated into these existing legal schemes." He added that it's wise for states to have "a diversity of regulatory schemes," as it "promotes accountability, because state and local officials are closest to the people affected by these laws." Also: Anthropic's new AI models for classified info are already in use by US gov The bill passed the House of Representatives with the moratorium included, to the displeasure of some Republican representatives who would prefer their states have a say in how they protect their rights, jobs, and privacy in the face of rapidly expanding AI. It's now awaiting a vote in the Senate; as of Thursday, the Senate parliamentarian asked Republicans to rewrite the moratorium to clarify it won't impact the existing $42.25 in broadband funding. Previous proposals withheld internet funds Broadband Equity, Access, and Deployment (BEAD) is a $42-billion program run by the National Telecommunications and Information Administration (NTIA) that helps states build infrastructure to expand high-speed internet access. Before it was revised, the Senate rule would have made all of that money, plus $500 million in new funding, contingent on states backing off their own AI laws.
[9]
How the Senate's ban on state AI regulation imperils internet access
A Senate rule is pitting state AI regulation against broadband funding. The Trump administration's tax bill -- also called its "big, beautiful bill" -- which rounds up key pieces of the president's agenda, also includes a rule that would prevent states from enforcing their own AI legislation for 10 years, if passed. After an initial budget hiccup, Republican senators successfully amended the rule to comply with budgetary requirements by adding that states trying to enforce AI regulations would not receive federal broadband funding. Here's why that matters. Broadband Equity, Access, and Deployment (BEAD) is a $42-billion program run by the National Telecommunications and Information Administration (NTIA) that helps states build infrastructure to expand high-speed internet access. The Senate rule makes all of that money, plus $500 million in new funding, contingent on states backing off their own AI laws. The issue is twofold: if passed, the rule would both prohibit states from enforcing AI legislation and put often critical funding for internet access at risk. And it wouldn't only impact in-progress legislation. Laws that states have already passed would stay intact in writing, but would effectively be rendered useless, lest states want to put their broadband funding on the line. "States like New York, Texas, and Utah would all have to choose between protecting their residents against faulty AI and billions in funding to help expand broadband access across their state," Jonathan Walter, a senior policy adviser at The Leadership Conference's Center for Civil Rights and Technology, told ZDNET. Earlier this month, the New York State Senate passed the RAISE Act, a first-of-its-kind bill that would require larger AI companies to publish safety, security, and risk evaluations, disclose breaches and other incidents, and allow the state's attorney general to bring civil penalties against companies when they don't comply. Walter added that the vagueness of the ban's language could block states' oversight of non-AI-powered automation as well, including "insurance algorithms, autonomous vehicle systems, and models that determine how much residents pay for their utilities." The administration is due to release its AI policy on July 22. In the meantime, the country is effectively flying blind, which has prompted several states to introduce their own AI bills. Even under the Biden administration, which took some steps to regulate AI, states were already introducing AI legislation as the technology evolved rapidly into the unknown. "The main issue here is that there are already real, concrete harms from AI, and this legislation is going to take the brakes away from states without replacing it with anything at all," said Chas Ballew, CEO of AI agent provider Conveyor and a former Pentagon regulatory attorney. By preventing states from enforcing individual AI policy when federal regulation is still a big question mark, the Trump administration opens the door for AI companies to accelerate without any checks or balances -- what Ballew called a "dangerous regulatory vacuum" that would give companies "a decade-long free pass to deploy potentially harmful AI systems without oversight." Given how rapidly generative AI has evolved just since ChatGPT's launch in 2022, a decade is eons in technological terms. What's more, President Donald Trump's second term thus far doesn't suggest AI safety is a priority for federal regulation.
Since January, the Trump administration has overridden safety initiatives and testing partnerships put in place by the Biden administration, shrunk and renamed the US AI Safety Institute the "pro-innovation, pro-science" US Center for AI Standards and Innovation, and cut funding for AI research. "Even if President Trump met his own deadline for a comprehensive AI policy, it's unlikely that it will seriously address harms from faulty and discriminatory AI systems," Walter said. AI systems used for HR tech, hiring, and financial applications like determining mortgage rates have been shown to act with bias towards marginalized groups and can display racism. Understandably, AI companies have expressed a preference for federal regulation over individual state laws, which would make maintaining compliant models and products easier than trying to abide by patchwork legislation. But in some cases, states may need to set their own regulations for AI, even with a federal foundation in place. "The differences between states with respect to AI regulation reflect the different approaches states have to the underlying issues, like employment law, consumer protection laws, privacy laws, and civil rights," Ballew points out. "AI regulation needs to be incorporated into these existing legal schemes." He added that it's wise for states to have "a diversity of regulatory schemes," as it "promotes accountability, because state and local officials are closest to the people affected by these laws." The principles of federalism, like the Tenth Amendment reserving to states "the powers not delegated to the United States by the Constitution, nor prohibited by it to the States," and the idea of states as "laboratories of democracy" are based on the idea that self-governance is good, and that too much top-down governance is counterproductive. The bill passed the House of Representatives with the moratorium included, to the displeasure of some Republican representatives who would prefer their states have a say in how they protect their rights, jobs, and privacy in the face of rapidly expanding AI. It's now awaiting a vote in the Senate; as of Thursday, the Senate parliamentarian asked Republicans to rewrite the moratorium to clarify it won't impact the existing $42.45 billion in broadband funding. How would losing BEAD funding impact states if the moratorium passes as written? "This ban on state and local AI laws would allow NTIA to deobligate the $42.45 billion already obligated BEAD funding to states," Walter explained. "When NTIA reobligates the funding, the new AI Moratorium and Master Service Agreement conditions would apply. This creates a backdoor to apply new AI requirements to the entire $42.45 billion program, not just the new $500 million." "This will likely mean fewer people will end up getting access to high-quality, affordable broadband," he concluded. ZDNET will update this story as the Senate debate over the moratorium language continues.
[10]
Senate drops plan to ban state AI laws
The US Senate has voted overwhelmingly to remove a moratorium on states regulating AI systems from the Republican "big, beautiful bill." Legislators agreed by a margin of 99 to 1 to drop the controversial proposal during a protracted fight over the omnibus budget bill, which is still under debate. The vote followed failed attempts to revise the rule in a way that would placate holdouts, particularly Sen. Marsha Blackburn (R-TN), one of the moratorium's first opponents. Over the weekend, Blackburn struck a deal with Sen. Ted Cruz (R-TX) that would have cut the moratorium to five years and allowed states to continue enforcing AI laws that addressed online child safety as well as individuals' names, images, and likenesses. But after a day of furious backlash from the populist right, driven primarily by MAGA internet powerhouses Steve Bannon and Mike Davis, Blackburn relented at the last minute -- and chose, instead, to attach her name to a Democrat-sponsored amendment that sought to remove the moratorium altogether.
[11]
Senate removes ban on state AI regulations from Trump's tax bill
States will be able to enact AI legislation again - but a federal plan remains unclear, and the clock is ticking. Until now, the Trump administration's tax bill -- also called its "big, beautiful bill," which passed in the Senate on Tuesday -- included a rule that would prevent states from enforcing their own AI legislation for five years, and would withhold up to $500 million in funding for AI infrastructure if states don't comply. On Tuesday, a day into a "vote-o-rama" that began Monday in an effort to pass Trump's tax bill before the July 4 holiday, the Senate voted 99 to one to remove the proposed moratorium on states' ability to regulate AI. The vote came just days after senators had amended the original proposal of a 10-year ban on enforcement to five years and added exemptions for state laws targeting unfair or deceptive practices and child sexual abuse material (CSAM). The initial version of the rule also made $42 billion in broadband internet funding dependent on states' compliance with the 10-year ban. The amended version only held $500 million in AI funding for ransom if states disobeyed. If passed, the rule would have prohibited states from enforcing AI legislation for five years and simultaneously put AI funding for states in limbo. It wouldn't have only affected in-progress legislation; laws that states had already passed would stay intact in writing but would effectively be rendered useless, lest states want to put their AI funding on the line. In practice, this would create a patchwork imbalance across the country: Some states would have thorough legislation but no funding to advance AI safely, while others have no regulation but plenty of funding to keep up in the race. "State and local governments should have the right to protect their residents against harmful technology and hold the companies responsible to account," said Jonathan Walter, a senior policy adviser at The Leadership Conference's Center for Civil Rights and Technology. Many advocates fought to get the ban removed from the tax bill and celebrated the news on Tuesday, including Adam Billen, vice president of public policy at Encode, a Washington, D.C.-based responsible AI organization. "40 state AGs, 14 governors. 260 state lawmakers from all 50 states, multiple 140+ org coalition letters we rallied, thousands of calls and emails from parents and constituents, and a few key Congressional champions later, and we have it nearly completely killed," he said in a LinkedIn post. "Even the provision's primary sponsors voted to strip it in the end." The administration is due to release its AI policy on July 22. In the meantime, the country is effectively flying blind, which has prompted several states to introduce their own AI bills. Under the Biden administration, which took some steps to regulate AI, states were already introducing AI legislation as the technology evolved rapidly into the unknown. Walter added that the vagueness of the ban's language could have blocked states' oversight of non-AI-powered automation as well, including "insurance algorithms, autonomous vehicle systems, and models that determine how much residents pay for their utilities."
"The main issue here is that there are already real, concrete harms from AI, and this legislation [would] take the brakes away from states without replacing it with anything at all," said Chas Ballew, CEO of AI agent provider Conveyor and a former Pentagon regulatory attorney. By preventing states from enforcing individual AI policy when federal regulation is still a big question mark, the Trump administration would have opened the door for AI companies to accelerate without any checks or balances -- what Ballew called a "dangerous regulatory vacuum" that would give companies "a decade-long free pass to deploy potentially harmful AI systems without oversight." President Trump's second term thus far doesn't suggest AI safety is a priority for federal regulation. Since January, the Trump administration has overridden safety initiatives and testing partnerships put in place by the Biden administration, shrunken and renamed the US AI Safety Institute the "pro-innovation, pro-science" US Center for AI Standards and Innovation, and cut funding for AI research. Also: AI leaders must take a tight grip on regulatory, geopolitical, and interpersonal concerns "Even if President Trump met his own deadline for a comprehensive AI policy, it's unlikely that it will seriously address harms from faulty and discriminatory AI systems," Walter said. AI systems used for HR tech, hiring, and financial applications like determining mortgage rates have been shown to act with bias toward marginalized groups and can display racism. Understandably, AI companies have expressed a preference for federal regulation over individual state laws, which would make maintaining compliant models and products easier than trying to abide by patchwork legislation. But in some cases, states may need to set their own regulations for AI, even with a federal foundation in place. "The differences between states with respect to AI regulation reflect the different approaches states have to the underlying issues, like employment law, consumer protection laws, privacy laws, and civil rights," Ballew points out. "AI regulation needs to be incorporated into these existing legal schemes." Also: Anthropic's new AI models for classified info are already in use by US gov He added that it's wise for states to have "a diversity of regulatory schemes," as it "promotes accountability, because state and local officials are closest to the people affected by these laws." Broadband Equity, Access, and Deployment (BEAD) is a $42-billion program run by the National Telecommunications and Information Administration (NTIA) that helps states build infrastructure to expand high-speed internet access. Before it was revised, the Senate rule would have made all of that money, plus $500 million in new funding, contingent on states backing off their own AI laws. Get the morning's top stories in your inbox each day with our Tech Today newsletter.
[13]
Senate Removes Tax Bill Provision Limiting State AI Regulation
The Senate killed a controversial effort to prevent US states from regulating artificial intelligence, delivering a win for critics of the biggest tech companies after a compromise proposal collapsed. Senators voted 99-1 early Tuesday to strip the language out of President Donald Trump's signature tax legislation during a marathon session that began Monday and continued through the night. The overwhelming opposition came despite widespread support for the pause on state AI legislation from Trump administration officials and GOP allies in Silicon Valley.
[14]
Senate nixes Trump's ban on state AI rules
It took a tie-breaking vote from Vice President JD Vance to pass Trump's budget reconciliation bill through the Senate on Tuesday, but a controversial section that would have barred states from regulating AI was struck down in a much clearer fashion. Lawmakers were definitive in their vote, with 99 voting to strip the state AI provision and just a single Senator - Thom Tillis (R-NC) - voting to keep the moratorium in place. Even Ted Cruz (R-TX), who had offered last-minute concessions to Marsha Blackburn (R-TN) that would have seen her support the state restriction, abandoned his position when Blackburn backed out. The rule was part of the House-passed version of Trump's "One Big Beautiful Bill" budget reconciliation act, and would have prevented US states from making any state-level laws governing AI systems, with a few exceptions. The moratorium was originally designed to last for a decade. Blackburn, whose home state of Tennessee is the hub of the US country music world, has faced pressure from Nashville to allow states to regulate AI, which creatives view as a threat to their livelihoods. She struck a deal with Cruz over the weekend to support the state-level AI moratorium, but only if it was shortened to five years, and with exceptions for child safety and privacy rules. Their agreement had evaporated by Monday, with Blackburn reportedly abandoning the compromise and demanding a formal roll call vote on the proposal, forcing each Senator to go on record supporting or opposing the state-level AI ban - a political event so dangerous even Ted Cruz wouldn't hop a jet to avoid the consequences. With that matter struck from the bill by such a definitive vote, it's unlikely it'll show back up in the final bill - but it still could. Now that the Senate has passed its 940-page version of the bill, the matter goes back to the House for another round of voting. Both chambers will need to agree on a final version of the bill and pass that one along to President Trump to sign into law. Trump wants this all to happen by Friday so he can sign the bill before Congress goes on recess for the July 4th holiday, giving the House precious little time to wrangle over this one point. So the US will likely continue to have some degree of AI regulation, albeit fractured across 50 states. That's better than the federal government's current record of no comprehensive AI legislation.
[15]
Senators Came to Their Senses on AI Regulation Ban
Some sense has prevailed in the Senate -- a 99-1 vote against a provision in its huge tax and spending bill that would have banned state-level artificial-intelligence laws for the next 10 years. It's been just 944 dizzying days since ChatGPT was launched into the world -- imagine what might have happened over the next 3,653. A last-gasp effort to amend the bill, which included reducing 10 years to five, also failed. The new wording would have been more onerous than the original, decimating existing state laws on facial recognition and data privacy. New laws will need to tackle AI-triggered issues around discrimination, recruitment and mental health. The matter is simply too urgent to be left only in Washington's hands. Senators rightly saw through the moratorium, recognizing it as doing the bidding of big tech companies that want free rein to do as they please in the insatiable race to build and sell AI.
[16]
US Senate debates whether to adopt revised state AI regulation ban
WASHINGTON, June 30 (Reuters) - Two key U.S. Republican senators agreed to revise a proposed federal moratorium on state regulation of artificial intelligence, cutting it to five years and allowing states to adopt rules on child online safety and on protecting artists' images or likenesses. Senate Commerce Committee chair Ted Cruz originally proposed securing compliance by blocking states that regulate AI from a $42 billion broadband infrastructure fund as part of a broad tax and budget bill. A revised version released last week would only restrict states regulating AI from tapping a new $500 million fund to support AI infrastructure. Under a compromise announced Sunday by Senator Marsha Blackburn, a critic of the state AI regulatory moratorium, the proposed 10-year moratorium would be cut to five years and allow states to regulate issues like protecting artists' voices or child online safety if they do not impose an "undue or disproportionate burden" on AI. Tennessee passed a law last year dubbed the ELVIS Act to protect songwriters and performers from the use of AI to make unauthorized fake works in the image and voice of well-known artists. Texas approved legislation to bar AI use for the creation of child pornography or to encourage a person to commit physical self-harm or commit a crime. It is not clear if the change will be enough to assuage concerns. On Friday, 17 Republican governors urged the Senate to drop the AI plan. "We cannot support a provision that takes away states' powers to protect our citizens. Let states function as the laboratories of democracy they were intended to be and allow state leaders to protect our people," said the governors, led by Arkansas' Sarah Huckabee Sanders. U.S. Commerce Secretary Howard Lutnick voiced his support for the revised measure, calling it a pragmatic compromise. "Congress should stand by the Cruz provision to keep America First in AI," Lutnick wrote on X. Congress has failed for years to pass any meaningful AI regulations or safety measures. Senator Maria Cantwell, the top Democrat on the Commerce Committee, said the Blackburn-Cruz amendment "does nothing to protect kids or consumers. It's just another giveaway to tech companies." Cantwell said Lutnick could simply opt to strip states of internet funding if they did not agree to the moratorium.
[17]
US Senate rejects plan to stop states regulating AI
The US Senate has voted down a proposed 10-year ban on states regulating artificial intelligence models, ending a controversial plan supported by Big Tech companies. Senators voted by a margin of 99 to one in favour of an amendment to remove the wording from Donald Trump's flagship tax and spending legislation. The vote in the early hours of Tuesday morning came as part of a wider marathon voting session in the Senate over the US president's "big, beautiful bill". Proponents, including Big Tech companies, argued that the provision to restrict AI regulation was necessary to prevent a raft of inconsistent regional rules that could stifle innovation and lead the US to lose ground to China. But it had caused divisions within the Republican party. Commerce secretary Howard Lutnick had said on Monday that he supported a five-year curb on state regulation of AI as a compromise, saying in a message on social media site X that the US "must prioritise investment and innovation" if it was "serious about winning the AI race". The proposed moratorium had, however, been criticised by some Republican politicians, who raised concerns about banning states from overseeing a powerful technology with the potential to cause social and economic upheaval. AI safety campaigners also warned that relying on self-regulation could have disastrous societal consequences as Silicon Valley competes to release ever more powerful models. The provision had been included in the tax and spending bill as part of the US House of Representatives' version of the proposed legislation.
[18]
US Senate strikes AI regulation ban from Trump megabill
WASHINGTON, July 1 (Reuters) - The Republican-led U.S. Senate voted overwhelmingly on Tuesday to remove a 10-year federal moratorium on state regulation of artificial intelligence from President Trump's sweeping tax-cut and spending bill. Lawmakers voted 99-1 to strike the ban from the bill by adopting an amendment offered by Republican Senator Marsha Blackburn. The action came during a marathon session known as a "vote-a-rama," in which lawmakers offered numerous amendments to the legislation that Republicans eventually hope to pass. Republican Senator Thom Tillis was the lone lawmaker who voted to retain the ban. The Senate version of Trump's legislation would have only restricted states regulating AI from tapping a new $500 million fund to support AI infrastructure. Major AI companies, including Alphabet's Google (GOOGL.O) and OpenAI, have expressed support for Congress taking AI regulation out of the hands of states to free innovation from a panoply of differing requirements. Blackburn presented her amendment to strike the provision a day after agreeing to compromise language with Senate Commerce Committee chair Ted Cruz that would have cut the ban to five years and allowed states to regulate issues such as protecting artists' voices or child online safety if they did not impose an "undue or disproportionate burden" on AI. But Blackburn withdrew her support for the compromise before the amendment vote. "The current language is not acceptable to those who need these protections the most," the Tennessee Republican said in a statement. "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens."
[19]
Ban on state AI laws set to pass, after exemption deals struck on musicians' rights and child safety
An amendment to Trump's tax bill would prevent states from legislating the AI industry for five years. If there's one thing the AI industry needs, it's more regulation. Yet, soon individual US states might not have much say in what AI companies can and can't do thanks to Trump-pleasing senators. That's right, an AI-friendly amendment to the president's tax legislation is on the road to approval -- despite concerns that shoehorning it into the bill is illegal. The amendment would prevent states from legislating the AI industry for five years, Bloomberg reports. Only states that cooperate will be allowed to access some of the $500 million of funding for AI infrastructure and the like included in the bill. Senator Marsha Blackburn (R-Tennessee) cleared the way for it, agreeing to a deal on Sunday with Senator Ted Cruz (R-Texas) that would exempt her home state's Ensuring Likeness Voice and Image Security (ELVIS) Act. Signed in early 2024, the ELVIS Act is meant to protect musicians from having AI use their likenesses and voices without permission. As part of the new deal, Cruz reduced the ban from ten to five years -- because five years of an unregulated AI industry surely won't cause any damage. A fresh draft of the amendment, obtained by Politico, also includes exemptions for "a law or regulation pertaining to unfair or deceptive acts or practices, child online safety, child sexual abuse material, rights of publicity, protection of a person's name, image, voice, or likeness and any necessary documentation for enforcement," as long as they don't place an "undue or disproportionate burden" on AI systems. An earlier version of the provision, which included the decade-long ban, passed the House in May. While Blackburn's decision will likely push it forward, Republican governors across the country have also voiced their disdain for the amendment. On Friday, 17 governors sent a letter asking for its removal (after sucking up about the rest of the tax bill, of course). They stated that it "threatens to undo all the work states have done to protect our citizens from the misuse of artificial intelligence."
[20]
Senate strikes AI provision from GOP bill after uproar from the states
WASHINGTON (AP) -- A proposal to deter states from regulating artificial intelligence for a decade was soundly defeated in the U.S. Senate on Tuesday, thwarting attempts to insert the measure into President Donald Trump's big bill of tax breaks and spending cuts. The Senate voted 99-1 to strike the AI provision from the legislation after weeks of criticism from both Republican and Democratic governors and state officials. The measure was originally proposed as a 10-year ban on states doing anything to regulate AI; lawmakers later tied it to federal funding so that only states that backed off on AI regulations would be able to get subsidies for broadband internet or AI infrastructure. A last-ditch Republican effort to save the provision would have reduced the time frame to five years and sought to exempt some favored AI laws, such as those protecting children or country music performers from harmful AI tools. But that effort was abandoned when Sen. Marsha Blackburn, a Tennessee Republican, teamed up with Democratic Sen. Maria Cantwell of Washington on Monday night to introduce an amendment to strike the entire proposal. Voting on the amendment happened after 4 a.m. Tuesday as part of an overnight session as Republican leaders sought to secure support for the tax cut bill while fending off other proposed amendments, mostly from Democrats trying to defeat the package. Proponents of an AI moratorium had argued that a patchwork of state and local AI laws is hindering progress in the AI industry and the ability of U.S. firms to compete with China. Some prominent tech leaders welcomed the idea after Republican Sen. Ted Cruz of Texas, who leads the Senate Commerce committee, floated it at a hearing in May. But state and local lawmakers and AI safety advocates argued that the rule is a gift to an industry that wants to avoid accountability for its products. Led by Arkansas Gov. Sarah Huckabee Sanders, a majority of GOP governors sent a letter to Congress opposing it. Also appealing to lawmakers to strike the provision was a group of parents of children who have died as a result of online harms.
[21]
Senate votes against curbing state-level AI regulation
Yesterday, the Senate was poised to restrict states' power to regulate AI. Now, the measure is dead in the water, with the Senate voting 99-1 to remove the provision. Are you also having a bit of whiplash? Here's what you need to know about the amendment's rightful journey into the trash can of history. Senator Ted Cruz (R-Texas) had pushed for an amendment to Trump's tax bill that would ban states from regulating the AI industry for ten years -- if the state took AI infrastructure funding included in the aforementioned bill. A version of the provision passed the House in May. On Sunday, Senator Marsha Blackburn (R-Tennessee) agreed to a version that would reduce the moratorium to five years and include exceptions for regulations around child safety, deceptive acts and protection of a person's likeness, voice, name and more. The new provision also exempted Tennessee's Ensuring Likeness Voice and Image Security (ELVIS) Act, enacted last year. The ELVIS Act was passed to prevent AI from using musicians' likenesses and voices without their consent. Yet, backlash against the amendment continued from Republican and Democratic leaders, Politico reports. By day's end, Blackburn had seen sense and withdrawn her support. The Senate voted early Tuesday morning to nix the provision, with even Cruz backing its removal.
[22]
How a GOP rift over tech regulation doomed a ban on state AI laws in Trump's tax bill
NEW YORK (AP) -- A controversial bid to deter states from regulating artificial intelligence for a decade seemed on its way to passing as the Republican tax cut and spending bill championed by President Donald Trump worked its way through the U.S. Senate. But as the bill neared a final vote, a relentless campaign against it by a constellation of conservatives -- including Republican governors, lawmakers, think tanks and social groups -- had been eroding support. One, conservative activist Mike Davis, appeared on the show of right-wing podcaster Steve Bannon, urging viewers to call their senators to reject this "AI amnesty" for "trillion-dollar Big Tech monopolists." He said he also texted with Trump directly, advising the president to stay neutral on the issue despite what Davis characterized as significant pressure from White House AI czar David Sacks, Commerce Secretary Howard Lutnick, Texas Sen. Ted Cruz and others. Conservatives passionate about getting rid of the provision had spent weeks fighting others in the party who favored the legislative moratorium because they saw it as essential for the country to compete against China in the race for AI dominance. The schism marked the latest and perhaps most noticeable split within the GOP about whether to let states continue to put guardrails on emerging technologies or minimize such interference. In the end, the advocates for guardrails won, revealing the enormous influence of a segment of the Republican Party that has come to distrust Big Tech. They believe states must remain free to protect their citizens against potential harms of the industry, whether from AI, social media or emerging technologies. "Tension in the conservative movement is palpable," said Adam Thierer of the R Street Institute, a conservative-leaning think tank. Thierer first proposed the idea of the AI moratorium last year. He noted "the animus surrounding Big Tech" among many Republicans. "That was the differentiating factor."
Conservative v. conservative in a last-minute fight
The Heritage Foundation, children's safety groups and Republican state lawmakers, governors and attorneys general all weighed in against the AI moratorium. Democrats, tech watchdogs and some tech companies opposed it, too. Sensing the moment was right on Monday night, Republican Sen. Marsha Blackburn of Tennessee, who opposed the AI provision and had attempted to water it down, teamed up with Democratic Sen. Maria Cantwell of Washington to suggest striking the entire proposal. By morning, the provision was removed in a 99-1 vote. The whirlwind demise of a provision that initially had the backing of House and Senate leadership and the White House disappointed other conservatives who felt it gave China, a main AI competitor, an advantage. Ryan Fournier, chairman of Students for Trump and chief marketing officer of the startup Uncensored AI, had supported the moratorium, writing on X that it "stops blue states like California and New York from handing our future to Communist China." "Republicans are that way ... I get it," he said in an interview, but added there needs to be "one set of rules, not 50" for AI innovation to be successful.
AI advocates fear a patchwork of state rules
Tech companies, tech trade groups, venture capitalists and multiple Trump administration figures had voiced their support for the provision that would have blocked states from passing their own AI regulations for years.
They argued that in the absence of federal standards, letting the states take the lead would leave tech innovators mired in a confusing patchwork of rules. Lutnick, the commerce secretary, posted that the provision "makes sure American companies can develop cutting-edge tech for our military, infrastructure, and critical industries -- without interference from anti-innovation politicians." AI czar Sacks had also publicly supported the measure. After the Senate passed the bill without the AI provision, the White House responded to an inquiry for Sacks with the president's position, saying Trump "is fully supportive of the Senate-passed version of the One, Big, Beautiful Bill." Acknowledging defeat of his provision on the Senate floor, Cruz noted how pleased China, liberal politicians and "radical left-wing groups" would be to hear the news. But Blackburn pointed out that the federal government has failed to pass laws that address major concerns about AI, such as keeping children safe and securing copyright protections. "But you know who has passed it?" she said. "The states."
Conservatives want to win the AI race, but disagree on how
Conservatives who distrust Big Tech over what they see as social media companies stifling speech during the COVID-19 pandemic and around elections said that tech companies shouldn't get a free pass, especially on something that carries as much risk as AI. Many who opposed the moratorium also brought up preserving states' rights, though proponents countered that AI issues transcend state borders and Congress has the power to regulate interstate commerce. Eric Lucero, a Republican state lawmaker in Minnesota, noted that many other industries already navigate different regulations established by both state and local jurisdictions. "I think everyone in the conservative movement agrees we need to beat China," said Daniel Cochrane from the Heritage Foundation. "I just think we have different prescriptions for doing so." Many argued that in the absence of federal legislation, states were best positioned to protect citizens from the potential harms of AI technology. "We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous," Rep. Marjorie Taylor Greene wrote on X.
A call for federal rules
Another Republican, Texas state Sen. Angela Paxton, wrote to Cruz and his counterpart, Sen. John Cornyn, urging them to remove the moratorium. She and other conservatives said some sort of federal standard could help clarify the landscape around AI and resolve some of the party's disagreements. But with the moratorium dead and Republicans holding only narrow majorities in both chambers of Congress, it's unclear whether they will be able to agree on a set of standards to guide the development of the burgeoning technology. In an email to The Associated Press, Paxton said she wants to see limited federal AI legislation "that sets some clear guardrails" around national security and interstate commerce, while leaving states free to address issues that affect their residents. "When it comes to technology as powerful and potentially dangerous as AI, we should be cautious about silencing state-level efforts to protect consumers and children," she said.
[23]
Senate's New A.I. Moratorium Proposal Draws Fresh Criticism
Language in the chamber's spending bill says that state laws related to A.I. cannot pose an "undue or disproportionate burden" to tech companies. Two senior senators have reached a compromise on an amendment in the Republican economic policy bill that would block state laws on artificial intelligence. Senators Marsha Blackburn, Republican of Tennessee, and Ted Cruz, Republican of Texas, agreed late Sunday to decrease a proposed moratorium on state laws regulating the technology to five years from 10. But Democratic lawmakers and consumer protection groups on Monday criticized new language in the amendment that would create a higher standard for the enforcement of existing tech-related state laws, including those for online child safety and consumer protections. Any current laws related to A.I. cannot pose an "undue or disproportionate burden" to A.I. companies, according to the amendment. That broad language could allow tech companies -- almost all of which are developing A.I. -- to challenge existing state laws and regulations that apply to the use of a wide range of automated technologies, legal experts said. Democrats and consumer protection groups warned that the new language could strip consumers of important protections provided by state laws aimed at warding off robocalls, regulating social media algorithms that steer users toward harmful content and prohibiting child sexual abuse imagery.
[24]
Defeat of a 10-Year Ban on State A.I. Laws Is a Blow to Tech Industry
All but a handful of states have some laws regulating artificial intelligence. The defeat early Tuesday of a ban on state laws for artificial intelligence dealt a major blow to the tech industry on the verge of a policy victory. In a 99-1 vote, the Senate voted overwhelmingly to strike an amendment to the Republican economic policy package that would have imposed a decadelong moratorium on attempts to regulate A.I. by the states. The before-sunrise vote was a win for consumer groups and Democrats, who had argued for weeks against the provision that they feared would remove any threat of oversight for the powerful A.I. industry. "The Senate came together tonight to say that we can't just run over good state consumer protection laws," Senator Maria Cantwell, Democrat of Washington, said in a statement. "States can fight robocalls, deepfakes and provide safe autonomous vehicle laws." There are no federal laws regulating A.I., but states have enacted dozens of laws that strengthen consumer privacy, ban A.I.-generated child sexual abuse material and outlaw deepfake videos of political candidates. All but a handful of states have some laws regulating artificial intelligence in place. It is an area of deep interest: All 50 states have introduced bills in the past year tied to the issue. The provision, introduced in the Senate by Senator Ted Cruz, Republican of Texas, sparked intense criticism from state attorneys general, child safety groups and consumer advocates who warned the amendment would give A.I. companies a clear runway to develop unproven and potentially dangerous technologies.
[25]
Ted Cruz's Ban on AI Regulation Gets Last-Minute Boot From 'Big, Beautiful Bill'
Donald Trump's "Big, Beautiful Bill" is packed with all sorts of problematic policies, but the Senate did manage to successfully strip it of one: the 10-year ban on state-level artificial intelligence laws. During the Senate's "vote-a-rama," it voted 99 to 1 to adopt an amendment that will strike the restrictions on state-level regulations from the spending bill. The provision, which received a considerable amount of support from Big Tech firms and was championed by Texas Senator Ted Cruz, would have prevented any state that takes funding from a federal broadband fund from passing any legislation that would regulate AI within their borders. The amendment to strip that language out of the bill was proposed by Republican Senator Marsha Blackburn and received near-unanimous support, with Republican Thom Tillis standing as the lone "nay" vote. According to Reuters, Ted Cruz lamented the decision to kill the restrictions entirely, as he had proposed a compromise that would have resulted in a five-year ban and allowed states to regulate a narrow band of issues related to AI, like combating deepfakes of artists, but ultimately voted in favor of striking it entirely. But hey, everyone laments Ted Cruz so, call it even. It's unclear if Trump really cares about this particular provision personally (he opted not to weigh in on the issue publicly), but the folks he keeps around him seem pretty disappointed that the provision was killed. According to Bloomberg, White House technology advisers Michael Kratsios and David Sacks both supported the ban. Sacks, speaking recently at an AWS Summit event, warned that regulating AI now would be akin to “killing this thing in the cradle.†Commerce Secretary Howard Lutnick also backed the measure that initially appeared in the bill, claiming that it was important for national security to prevent states from passing their own AI legislation. He's called for a national-level, comprehensive AI regulation, but that is notably a thing that has not happened yet. Ditching the provision is a win for states, which are moving much faster on regulating AI than their federal counterparts. A total of 47 states have already proposed some form of AI-related legislation, and nearly 1 in 5 have already enacted those proposals into lawâ€"including several red states, which flies against the Republican narrative that it's the Californias of the world that are cramping AI's style. This also means states won't be held hostage if they access Broadband Equity, Access, and Deployment (BEAD) funding, which is designed to expand broadband internet access.
[26]
States retain power to regulate AI as Senate approves amendment led by Cantwell
In a decisive 99-1 vote early Tuesday morning, the U.S. Senate struck down a provision that would have banned states from regulating artificial intelligence for 10 years. The amendment, co-sponsored by Sen. Maria Cantwell, D-Wash., and Sen. Marsha Blackburn, R-Tenn., removed the controversial measure from a broader domestic policy bill. The original provision would have prohibited states from passing new AI laws or enforcing existing regulations on AI models and automated systems. The decision was a blow to tech investors and companies that lobbied aggressively for the effort, including Andreessen Horowitz and OpenAI, creator of ChatGPT. Supporters of the moratorium argued that it's too hard for startups to comply with various state laws, stifling innovation, The New York Times reported. Microsoft, which is based in Redmond, Wash., "did not support a full moratorium but did lobby for a compromise that preserved the rights of states to regulate certain areas, including protecting consumers from the use of AI in fraud," a spokesperson said by email. Seattle-based Amazon did not respond to a request for comment. Cantwell celebrated the victory. "The Senate came together tonight to say that we can't just run over good state consumer protection laws," Cantwell said on Tuesday. "States can fight robocalls and deepfakes and provide safe autonomous vehicle laws. This also allows us to work together nationally to provide a new federal framework on artificial intelligence that accelerates U.S. leadership in AI while still protecting consumers." The proposed ban would have forced states to choose between federal broadband funding and maintaining AI protections. Washington state alone expects to receive $1.2 billion from the federal Broadband Equity, Access, and Deployment, or BEAD, program. Sen. Ted Cruz, R-Texas, introduced the provision in his chamber; Speaker Mike Johnson, R-La., first pushed for the measure in the House. Last year, 24 states enacted AI-related legislation, Cantwell said in a June media event addressing the amendment. "Congress is threatening these laws, which will leave hundreds of millions of Americans vulnerable to AI harms by abolishing those state law protections," she said. Washington Attorney General Nick Brown joined 39 attorneys general in a letter protesting the proposed ban and called states "laboratories of democracy" for developing AI standards. "In Washington, we have so many tech industries here that are leading some of the innovative developments in this field. But we also have to recognize many of the potential harms that come from AI across our states and across this country," Brown said during the earlier event with Cantwell. Washington has an Artificial Intelligence Task Force that is responsible for researching and crafting regulations around the development and use of AI technologies, and the state has already enacted several AI-related protections. It considered additional measures this year, including requirements for AI training data disclosure and helping users identify AI-generated content, though these didn't pass. Brown criticized the 10-year timeframe as "silly," noting AI's rapid evolution and Congress's difficulty reaching policy agreements. "It's really important that states be given that opportunity" to regulate, he said.
[27]
In dramatic reversal, Senate kills AI-law moratorium
A GOP-led bid to stop states from regulating AI collapsed after a deal to save it fell through, handing Silicon Valley a painful defeat. The U.S. Senate voted 99-1 in the predawn hours Tuesday to strip from the sprawling tax and immigration bill a provision that would have blocked states from regulating artificial intelligence for the next decade. The provision's resounding defeat came after Sen. Marsha Blackburn (R-Tennessee) backed out of a compromise she had previously struck with Sen. Ted Cruz (R-Texas) that would have reduced the pause to five years from the original 10 and exempted some categories of AI regulations. Cruz, who had championed the moratorium, ended up joining Blackburn in voting against it, along with all of their colleagues except for Sen. Thom Tillis (R-North Carolina). The vote on the AI moratorium came after 4 a.m. Tuesday as part of a marathon "vote-a-rama" on a slew of proposed changes to the so-called One Big Beautiful Bill Act, which carries much of President Donald Trump's domestic agenda. It left the Republican-led, industry-backed push to roll back state AI laws on life support as voting continued on the broader bill. Republican leaders and tech trade groups have pitched the multiyear freeze on state regulations as necessary to pave the way for U.S. AI firms to innovate and outcompete their Chinese counterparts. Last month, the House of Representatives passed a version of the tax and immigration bill that included a 10-year ban on states passing or enforcing regulations on AI. The measure, which would have rolled back dozens of laws already on the books in states around the country and left the industry essentially unregulated, drew intense pushback from Democratic leaders, advocates and state lawmakers from both parties. In the Senate, Cruz led a behind-the-scenes effort to rework the provision to comply with procedural restrictions and gain the support of what figured to be a small handful of Republican holdouts, including Blackburn. That push appeared on Sunday to have paid off when Cruz and Blackburn announced an agreement on a revised version of the moratorium that would have pared it back to a five-year pause and exempted some categories of AI regulations. Those carveouts would have allowed states to continue enforcing and passing laws related to kids' online safety, child sexual abuse material and personal publicity rights -- causes Blackburn has championed. Blackburn's home state of Tennessee, for example, could have still enforced the Elvis Act, which aims to protect musicians from impostors using AI voice-cloning tools. But states such as New York, which recently passed an AI safety act, and Colorado, which passed a comprehensive AI bill last year, would likely have had to suspend those laws if they wanted to apply for a share of the new infrastructure money. The changes in that compromise effort did little to assuage the provision's other opponents, who pointed to language they said left it unclear just what kinds of AI laws states could and could not pass. "The way these provisions are written, they're very sweeping, and they would trip up almost any attempt to regulate the harmful use of AI," said Ed Wytkind, interim director of the AFL-CIO's technology institute, on Monday. The labor group was among those urging senators to vote against the compromise Monday. So were advocates of online child safety laws, which Blackburn has made a top legislative priority, and some influential conservatives who favor tougher regulations on tech giants. 
On Monday evening, Blackburn announced she no longer supported the compromise and would instead propose an amendment to remove the AI-law moratorium altogether. "While I appreciate Chairman Cruz's efforts to find acceptable language that allows states to protect their citizens from the abuses of AI, the current language is not acceptable to those who need these protections the most," Blackburn told The Washington Post in a statement Monday night. "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens." With the moratorium also facing opposition from a few other Republicans and the entire Democratic caucus, Blackburn introduced an amendment to remove it on Tuesday morning that ultimately gained the support of everyone but Tillis, who was voting against every amendment to the bill. Opponents of the moratorium cheered the outcome Tuesday. "The Senate came together tonight to say that we can't just run over good state consumer protection laws," said Sen. Maria Cantwell (D-Washington). "States can fight robocalls, deepfakes and provide safe autonomous vehicle laws. This also allows us to work together nationally to provide a new federal framework on artificial intelligence that accelerates U.S. leadership in AI while still protecting consumers." Among those taking a victory lap Tuesday was Mike Davis, founder of the Article III project, a conservative judicial advocacy group, who opposed the moratorium. "Google and Meta had AI amnesty in the bag yesterday at 10 a.m.," Davis told The Post. "Then the Article III Project and Steve Bannon's War Room sprang into action. Sometimes feeling the heat makes people see the light. We are pleased 99 senators finally decided to side with kids and content creators over AI amnesty and Big Tech profits." Brad Carson, president of the nonprofit Americans for Responsible Innovation, said Tuesday that he hoped the landslide vote would end the push for a moratorium for good. "It threatened to strike so many laws important to voters that it mobilized policymakers, advocates, and people from across the country," he said. "Let this be a lesson to Congress -- freezing state AI laws without a serious replacement is a political nonstarter."
[28]
Federal Bill Would Ban State AI Laws for Next 10 Years
A federal proposal that would ban states and local governments from having their own regulations around AI for the next ten years is moving closer to being signed into law. Senator Ted Cruz and various Republican lawmakers are pushing to pass a major spending bill -- which President Trump has nicknamed the "big, beautiful bill" -- that includes a measure to stop states from creating their own rules on artificial intelligence. In May, the House added this provision to President Trump's full budget bill. According to the bill, no state "may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems" for 10 years, starting from the day the bill becomes law. While Congress has not passed any comprehensive laws on AI, many states have enacted their own regulations. For example, California passed several laws regarding the technology last year, including legislation banning political deepfakes. But a new federal proposal could override these state laws. According to a report by Reuters, many AI industry leaders support the federal law, saying it would help the U.S. stay ahead in innovation. Companies like Google and OpenAI believe having different rules in each state would slow progress and hurt the U.S.'s ability to compete with China. Critics of the federal proposal say it would stop states from passing laws that protect people from harm caused by AI. They argue it could reduce oversight and allow big AI companies to operate with little accountability. Sean O'Brien, president of the International Brotherhood of Teamsters, a union representing more than 1.3 million workers, says the bill "denies citizens the ability to make choices at the local or state level." "Pure and simple, it is a giveaway to Big Tech companies who reap economic value by continuing to operate in an unregulated void where their decisions and behavior are accountable to no one," O'Brien writes in a letter posted on X (the platform formerly known as Twitter). As lawmakers work to include the measure in a large GOP bill ahead of a July 4 deadline, Senator Marsha Blackburn said she reached a deal with Senator Ted Cruz on new language for the provision, according to The Hill. The new version would block states from regulating AI for five years if they want access to $500 million in AI infrastructure and deployment funding included in the bill. The original version, which Blackburn opposed, would have blocked state regulation for 10 years.
[29]
Senate votes to kill moratorium on AI state regulation
The proposed 10-year moratorium on state regulation of AI is dead, by overwhelming consensus from the Senate. In the early hours of Tuesday morning, the Senate voted 99-1 to remove controversial language from Republicans' budget legislation, referred to as the "Big Beautiful Bill." The proposed legislation was increasingly defanged as Congress tried to come up with a compromise to protect states' legislative independence. But ultimately, the Senate voted to remove the moratorium altogether. The 10-year ban on states' legislation of AI was contentious from the start. Big Tech companies like Meta, Google, Microsoft, and Amazon reportedly actively lobbied for passing the bill, saying patchwork state AI regulation would inhibit the U.S.' competitive edge against threats like China. Those opposed to the bill, including civil advocacy groups, AI safety researchers, and state attorneys general, said it would be an unprecedented concession of power to Big Tech and would strip states of their ability to protect people from AI harms. "The Senate's overwhelming rejection of this Big Tech power grab underscores the massive bipartisan opposition to letting AI companies run amok," said Max Tegmark, MIT professor and president of the Future of Life Institute, in a public statement. "The CEOs of these corporations have admitted they cannot control the very systems they're building, and yet they demand immunity from any meaningful oversight. This threatens families and jobs across America, and the Senate was wise to reject it." Sen. Marsha Blackburn of Tennessee introduced the amendment to remove the moratorium. Previously, she had been working with Sen. Ted Cruz of Texas to soften the provision. As of Monday night, the proposal offered $500 million for federal broadband funding if states opted in to the moratorium. But in the end, Blackburn said they couldn't reach an agreement. In a statement to the press, Blackburn said, "While I appreciate Chairman Cruz's efforts to find acceptable language that allows states to protect their citizens from the abuses of AI, the current language is not acceptable to those who need these protections the most."
[30]
Senate Rejects 10-Year Ban on State AI Laws in Major Blow To Tech Companies
U.S. senators voted to strike down a proposed 10-year ban that would have blocked states and local governments from creating their own AI regulations, dealing a major setback to tech companies that claimed such legislation would hinder innovation. On Tuesday, lawmakers voted overwhelmingly to strip a controversial 10-year ban on state-level AI regulation from President Trump's "Big Beautiful Bill." AI companies had lobbied hard to keep the rule in Trump's spending bill that would stop states from making their own laws for the next decade. Earlier this week, it looked like they might have a policy win. But the U.S. Senate, led by Republicans, voted 99-1 to remove that rule from the bill. The vote came after Republican Senator Marsha Blackburn of Tennessee introduced an amendment to take it out. The day before the vote, Blackburn had worked with Senator Ted Cruz on a possible compromise. Their idea was to shorten the ban to five years and allow states to make some rules -- for example, to protect artists or children -- as long as the rules weren't too strict on AI companies. However, Blackburn later decided not to support the compromise and went ahead with the full amendment to remove the ban entirely. "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens," Blackburn says in a statement. While there are no federal laws on AI, many states have enacted their own regulations. For example, California passed several laws regarding the technology last year, including legislation banning political deepfakes. But a new federal proposal could override these state laws. The news is a huge blow to AI companies, such as Google and OpenAI, who had supported the federal ban on state laws. They argued that having different rules in each state would slow progress and hurt the U.S.'s ability to compete with China and the ban would help the country stay ahead in AI innovation. The Trump administration supported the proposal as well. Commerce Secretary Howard Lutnick said the ban on state AI regulations was important for keeping the U.S. ahead in the AI race. "If we're serious about winning the AI race, we must prioritize investment and innovation," Lutnick writes on X (the platform formerly known as Twitter). But most Democrats, along with many Republicans, argued that blocking states from making their own rules would put consumers at risk and give large AI companies too much freedom.
[31]
New push for national AI rules likely after state ban fails
Why it matters: Congress' reluctance to set national AI rules for privacy, safety and intellectual property rights has left states to forge ahead with their own rules.
Driving the news: Some senators fought until the last minute to keep an industry-backed 10-year ban on state-level regulation in the budget bill.
Catch up quick: The Senate early Tuesday voted nearly unanimously to remove the proposed moratorium on state-level AI regulations from the budget bill.
Friction point: President Trump's aides and advisers were split on the moratorium.
Zoom out: More than 20 Democratic- and Republican-led states have passed AI regulation legislation.
Yes, but: Congress has always had a hard time passing laws regulating tech, and the mood in Washington right now favors innovation over regulation.
What's next: The battle over a moratorium is not over, said Chris MacKenzie, vice president of communications for Americans for Responsible Innovation.
[32]
Senators Reject 10-Year Ban on State-Level AI Regulation
Lawmakers voted 99-1 in an overnight session to remove the provision by adopting an amendment introduced by Marsha Blackburn, Republican of Tennessee, who had earlier broken with her party over the issue. Companies such as OpenAI and Google had previously argued in support of blocking states from regulating AI -- so as to avoid what they said would be a patchwork of rules that could hamper innovation. But critics on both the left and the right said the AI moratorium, which had earlier been approved by the House, was an attempt to forestall any effort to regulate new AI systems. Many also noted that Congress has not passed any significant new tech rules in decades -- meaning that a ban on state AI regulations might effectively mean no AI regulation at all. The version rejected by the Senate had earlier been reworded to meet budgetary rules, by making acceptance of funding from a $500 million infrastructure program conditional on accepting the moratorium. Blackburn, who is a highly vocal critic of Big Tech, led right-wing resistance to the moratorium. Although she briefly appeared to compromise with the bill's authors by coauthoring a watered-down five-year version of the ban, she ultimately ended up opposing it altogether. "This provision could allow Big Tech to continue to exploit kids, creators, and conservatives," she told Wired. "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens."
[33]
GOP senators remove state AI regulation moratorium from megabill
A controversial artificial intelligence provision was knocked out of the Republican megabill early Tuesday morning as it neared the finish line. The measure would have prevented states from regulating AI if they received broadband funds from a $500 million federal pot of money. The Senate vote was 99 to 1, with retiring Sen. Thom Tillis of North Carolina as the sole supporter. "The Senate came together tonight to say that we can't just run over good state consumer protection laws," Sen. Maria Cantwell of Washington, ranking Democrat on the Senate Commerce Panel, said in a statement. "This also allows us to work together nationally to provide a new federal framework on artificial intelligence that accelerates US leadership in AI while still protecting consumers." The provision stirred opposition from governors in both parties as well as state officials who warned the AI sector was attempting to get out of being accountable. Sen. Marsha Blackburn of Tennessee spearheaded efforts to remove it from the bill, and an earlier compromise with fellow GOP Sen. Ted Cruz of Texas had collapsed overnight. "This is a monumental win for Republican Governors, President Trump's one, big beautiful bill, and the American people," Arkansas Gov. Sarah Huckabee Sanders, an ex-Trump aide, wrote on X. She called it a win for states' rights. Still, proponents argued that uneven treatment of AI at the state level is preventing the U.S. from competing more efficiently with China, another leading power. Congress hasn't been able to pass a sweeping AI policy bill given lingering divisions over the strength of regulations.
[34]
Big Beautiful Bill AI provision unites an unexpected group of critics
As Senate Republicans rush to pass their hodgepodge tax and spending package -- the Big Beautiful Bill -- controversy has arisen around an unusual provision: a 10-year moratorium on states passing their own laws regulating artificial intelligence. Congress has been slow to pass any regulation on AI, a rapidly evolving technology, leaving states to write their own laws. Those state laws largely focus on preventing specific harms, like banning the use of deepfake technology to create nonconsensual pornography, to mislead voters about specific issues or candidates, or to mimic music artists' voices without permission. Some major companies that lead the U.S. AI industry have argued that a mix of state laws needlessly hamstrings the technology, especially as the U.S. seeks to compete with China. But a wide range of opponents -- including some prominent Republican lawmakers, child safety advocates and civil rights groups -- say states are a necessary bulwark against a dangerous technology that can cause unknown harms within the next decade. The Trump administration has been clear that it wants to loosen the reins on AI's expansion. During his first week in office, President Donald Trump signed an executive order to ease regulations on the technology and revoke "existing AI policies and directives that act as barriers to American AI innovation." And in February, Vice President JD Vance gave a speech at an AI summit in Paris that made clear that the Trump administration wanted to prioritize AI dominance over regulation. But a Pew Research Center study in April found that Americans who are not AI experts are far more concerned about the risks of AI than about its potential benefits. "Congress has just shown it can't do a lot in this space," Larry Norden, the vice president of the Elections and Government Program at the Brennan Center, a nonprofit tied to New York University that advocates on democracy issues, told NBC News. "To take the step to say we are not doing anything, and we're going to prevent the states from doing anything is, as far as I know, unprecedented. Especially given the stakes with this technology, it's really dangerous," Norden said. The provision in the omnibus package was introduced by the Senate Commerce Committee, chaired by Texas Republican Ted Cruz. Cruz's office deferred comment to the committee, which has issued an explainer saying that, under the proposed rule, states that want a share of a substantial federal investment in AI must "pause any enforcement of any state restrictions, as specified, related to AI models, AI systems, or automated decision systems for 10 years." On Friday, the Senate Parliamentarian said that while some provisions in the One Big Beautiful Bill Act are subject to a 60-vote threshold to determine whether or not they can remain in the bill, the AI moratorium is not one of them. Senate Republicans said they are aiming to bring the bill to a vote on Saturday. All Senate Democrats are expected to vote against the omnibus bill. But some Republicans have said they oppose the moratorium on states passing AI laws, including Sens. Josh Hawley of Missouri, Jerry Moran of Kansas and Ron Johnson of Wisconsin. Georgia Rep. Marjorie Taylor Greene, a staunch Trump ally, posted on X earlier this month that, when she voted for the House version of the bill, she didn't realize it would keep states from creating their own AI laws. "Full transparency, I did not know about this section," Greene wrote.
"We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states' hands is potentially dangerous." Tennessee Sen. Marsha Blackburn, a Republican on the Commerce Committee, has said she opposes the 10-year moratorium. "We cannot prohibit states across the country from protecting Americans, including the vibrant creative community in Tennessee, from the harms of AI," she said in a statement provided to NBC News. "For decades, Congress has proven incapable of passing legislation to govern the virtual space and protect vulnerable individuals from being exploited by Big Tech." State lawmakers and attorneys general of both parties also oppose the AI provision. An open letter signed by 260 state legislators expressed their "strong opposition" to the moratorium. "Over the next decade, AI will raise some of the most important public policy questions of our time, and it is critical that state policymakers maintain the ability to respond," the letter reads. Similarly, 40 state attorneys general from both parties manifested their opposition to the provision in a letter to Congress. "The impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI," they wrote. A Brennan Center analysis found that the moratorium would lead to 149 existing state laws being overturned. "State regulators are trying to enforce the law to protect their citizens, and they have enacted common sense regulation that's trying to protect the worst kinds of harms that are surfacing up to them from their constituents," Sarah Meyers West, the co-executive director of the AI Now Institute, a nonprofit that seeks to shape AI to benefit the public, told NBC News. "They're saying that we need to wait 10 years before protecting people from AI abuses. These things are live. They're affecting people right now," she said. AI and tech companies like Google and Microsoft have argued that the moratorium is necessary to keep the industry competitive with China. "There's growing recognition that the current patchwork approach to regulating AI isn't working and will continue to worsen if we stay on this path," OpenAI's chief global affairs officer, Chris Lehane, wrote on LinkedIn. "While not someone I'd typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward." "We cannot afford to wake up to a future where 50 different states have enacted 50 conflicting approaches to AI safety and security," Fred Humphries, Microsoft's corporate vice president of U.S. government affairs, said in an emailed statement The pro-business lobby Chamber of Commerce released a letter, signed by industry groups like the Independent Petroleum Association of America and the Meat Institute, in support of the moratorium. "More than 1,000 AI-related bills have already been introduced at the state and local level this year. Without a federal moratorium, there will be a growing patchwork of state and local laws that will significantly limit AI development and deployment," they wrote. In opposition, a diverse set of 60 civil rights organizations, ranging from the American Civil Liberties Union to digital rights groups to the NAACP, have signed their own open letter arguing for states to pass their own AI laws. 
"The moratorium could inhibit state enforcement of civil rights laws that already prohibit algorithmic discrimination, impact consumer protection laws by limiting the ability of both consumers and state attorneys general to seek recourse against bad actors, and completely eliminate consumer privacy laws," the letter reads. The nonprofit National Center on Sexual Exploitation opposed the moratorium on Tuesday, especially highlighting how AI has been used to sexually exploit minors. AI technology is already being used to generate child sex abuse material and to groom and extort minors, said Haley McNamara, the group's senior vice president of strategic initiatives and programs. "The AI moratorium in the budget bill is a Trojan horse that will end state efforts to rein in sexual exploitation and other harms caused by artificial intelligence. This provision is extremely reckless, and if passed, will lead to further weaponization of AI for sexual exploitation," McNamara said.
[35]
US Senate says no to deregulating AI in 99-1 vote
US Senators voted against a 10-year ban on regulating AI at the state level. US senators defeated a proposal that would ban states from regulating artificial intelligence (AI) companies. The US Senate voted 99-1 on Tuesday to strip from President Donald Trump's big bill of tax breaks a provision that would have banned any state-level AI regulation for 10 years. Under the original proposal, states that wanted federal AI investment would have to "pause any enforcement of any state restrictions, as specified, related to AI models, AI systems, or automated decision systems" for a decade. Republicans tried to save the provision by bringing it down to five years, but that effort was abandoned when Democratic senators Edward Markey and Maria Cantwell, along with Republican senator Marsha Blackburn, introduced a late-night motion to scrap the entire proposal. State lawmakers and AI safety advocates argued that the rule is a gift to an industry that wants to avoid accountability for its products. "Congress will not sell out our kids and local communities in order to pad the pockets of Big Tech billionaires," Senator Markey said in a statement after the vote, calling the provision to block state AI regulation "dangerous". Max Tegmark, president of the Future of Life Institute, said in a statement that the "overwhelming rejection" of the provision "underscores the massive bipartisan opposition to letting AI companies run amok". "The CEO's of these corporations have admitted they cannot control the very systems they're building, and yet they demand immunity from any meaningful oversight," he wrote. Those in President Trump's camp argue that a patchwork of state and local AI laws could hinder the country's progress in the industry and could hurt its ability to compete with China. Earlier this year, a report from Stanford University found that the US is still in the lead, followed closely by China, in the global race to become an AI leader. World leaders say that winning the race is critical to national security, developments in health, business and technology. Big Tech companies are split on how far AI regulation should go and who should be enforcing it. OpenAI, the maker of ChatGPT, said in its submission for the US AI Action Plan that it favours a "regulatory strategy that ensures the freedom to innovate," which would include "voluntary partnership" between government and the private sector. Google advised lawmakers to "preempt a chaotic patchwork of state-level rules on frontier AI development" by focusing on existing regulations already in place. In its submission, Meta quoted US Vice President JD Vance, who said earlier this year that "excessive regulation of the AI sector could kill a transformative industry just as it's taking off". Meta said that regulations restricting AI models "based on obsolete measurements," or imposing "onerous" reporting or testing requirements, would "impede innovation in the US". Along with changes to state AI legislation, Meta also asked the Trump administration to "reduce barriers to AI infrastructure investment," such as permitting barriers for data centre investments. In one of the first acts of his second term, Trump signed an executive order that called for the end of "AI policies and directives that act as barriers to American AI innovation," so the country can retain "global leadership".
Trump also revoked a 2023 executive order from former President Joe Biden that increased the federal government's capacity to "regulate, govern and support responsible use of AI".
[36]
A 'big beautiful bill' provision to prevent states from regulating AI for 10 years got nixed by the US Senate
A revised proposal that changed it to five years was quickly introduced and then abandoned. AI has been taking over workspaces and evolving much faster than the gears of government typically grind, so regulation hasn't caught up to its capabilities. While fights are happening all over the world to address copyright concerns, the US federal government has recently been intent on a laissez-faire approach. That became especially obvious when Republicans' "big, beautiful" bill proposed straight-up barring states and localities from implementing their own regulations for a full decade if they accepted certain federal funding for AI infrastructure. That provision has just been struck from the bill, though, as the Senate voted 99-1 to cut the moratorium. The idea had critics on both sides of the aisle. While Democrats have pushed back on the bill in general, the AI provision in particular has seen conservative blowback as well. According to a report by the Associated Press, Arkansas Gov. Sarah Huckabee Sanders and "a majority of GOP governors" wrote a letter opposing it. A few conservatives tried to salvage it, but the effort didn't go anywhere. Republican Sen. Marsha Blackburn called for a revised proposal that would shorten it and exempt laws relating to things like "deceptive practices" and "child sexual abuse material," but worked out an amendment with Democratic Sen. Maria Cantwell Monday to strike the whole thing. Given the ethical, intellectual, and environmental concerns raised by generative AI as it rapidly develops, it hardly seems an appropriate time to muzzle regulators. Big tech isn't likely to be thrilled. AI proponents tend to downplay the stakes here, and while OpenAI CEO Sam Altman has called for US AI regulation in the past, he isn't quite so keen these days -- testifying in May that "it is very difficult to imagine us figuring out how to comply with 50 different sets of regulation." Maybe his perspective changed when he needed ChatGPT to help raise his infant son. The AI provision was a small part of the wide-ranging bill, which Republicans aim to pass by July 4 and Democrats hope to block. The AP has more info on the bill being updated live as we speak.
[37]
Senate debates revised state AI regulation ban
Two key U.S. Republican senators agreed to shorten a proposed federal moratorium on state regulation of artificial intelligence to five years and to allow states to adopt rules on child online safety and the protection of artists' images and likenesses. Senate Commerce Committee chair Ted Cruz originally proposed securing compliance by blocking states that regulate AI from a $42 billion broadband infrastructure fund as part of a broad tax and budget bill. A revised version released last week would only restrict states that regulate AI from tapping a new $500 million fund to support AI infrastructure. Under a compromise announced Sunday by Senator Marsha Blackburn, a critic of the state AI regulatory moratorium, the proposed 10-year moratorium would be cut to five years and would allow states to regulate issues like protecting artists' voices or child online safety if they do not impose an "undue or disproportionate burden" on AI.
[38]
Senate strikes AI provision from GOP bill after uproar from the states
WASHINGTON (AP) -- A proposal to deter states from regulating artificial intelligence for a decade was soundly defeated in the U.S. Senate on Tuesday, thwarting attempts to insert the measure into President Donald Trump's big bill of tax breaks and spending cuts. The Senate voted 99-1 to strike the AI provision from the legislation after weeks of criticism from both Republican and Democratic governors and state officials. The measure was originally proposed as a 10-year ban on states doing anything to regulate AI; lawmakers later tied it to federal funding so that only states that backed off on AI regulations would be able to get subsidies for broadband internet or AI infrastructure. A last-ditch Republican effort to save the provision would have reduced the time frame to five years and sought to exempt some favored AI laws, such as those protecting children or country music performers from harmful AI tools. But that effort was abandoned when Sen. Marsha Blackburn, a Tennessee Republican, teamed up with Democratic Sen. Maria Cantwell of Washington on Monday night to introduce an amendment to strike the entire proposal. Voting on the amendment happened after 4 a.m. Tuesday as part of an overnight session as Republican leaders sought to secure support for the tax cut bill while fending off other proposed amendments, mostly from Democrats trying to defeat the package. Proponents of an AI moratorium had argued that a patchwork of state and local AI laws is hindering progress in the AI industry and the ability of U.S. firms to compete with China. Some prominent tech leaders welcomed the idea after Republican Sen. Ted Cruz of Texas, who leads the Senate Commerce committee, floated it at a hearing in May. But state and local lawmakers and AI safety advocates argued that the rule is a gift to an industry that wants to avoid accountability for its products. Led by Arkansas Gov. Sarah Huckabee Sanders, a majority of GOP governors sent a letter to Congress opposing it. Also appealing to lawmakers to strike the provision was a group of parents of children who have died as a result of online harms.
[39]
Senate strikes AI provision from GOP bill after uproar from the states
WASHINGTON -- A proposal to deter states from regulating artificial intelligence for a decade was soundly defeated in the U.S. Senate on Tuesday, thwarting attempts to insert the measure into President Donald Trump's big bill of tax breaks and spending cuts. The Senate voted 99-1 to strike the AI provision from the legislation after weeks of criticism from both Republican and Democratic governors and state officials. Originally proposed as a 10-year ban on states doing anything to regulate AI, lawmakers later tied it to federal funding so that only states that backed off on AI regulations would be able to get subsidies for broadband internet or AI infrastructure. A last-ditch Republican effort to save the provision would have reduced the time frame to five years and sought to exempt some favored AI laws, such as those protecting children or country music performers from harmful AI tools. But that effort was abandoned when Sen. Marsha Blackburn, a Tennessee Republican, teamed up with Democratic Sen. Maria Cantwell of Washington on Monday night to introduce an amendment to strike the entire proposal. Voting on the amendment happened after 4 a.m. Tuesday as part of an overnight session as Republican leaders sought to secure support for the tax cut bill while fending off other proposed amendments, mostly from Democrats trying to defeat the package. Proponents of an AI moratorium had argued that a patchwork of state and local AI laws is hindering progress in the AI industry and the ability of U.S. firms to compete with China. Some prominent tech leaders welcomed the idea after Republican Sen. Ted Cruz of Texas, who leads the Senate Commerce committee, floated it at a hearing in May. But state and local lawmakers and AI safety advocates argued that the rule is a gift to an industry that wants to avoid accountability for its products. Led by Arkansas Gov. Sarah Huckabee Sanders, a majority of GOP governors sent a letter to Congress opposing it. Also appealing to lawmakers to strike the provision was a group of parents of children who have died as a result of online harms.
[40]
How a GOP rift over tech regulation doomed a ban on state AI laws in Trump's tax bill
NEW YORK (AP) -- A controversial bid to deter states from regulating artificial intelligence for a decade seemed on its way to passing as the Republican tax cut and spending bill championed by President Donald Trump worked its way through the U.S. Senate. But as the bill neared a final vote, a relentless campaign against it by a constellation of conservatives -- including Republican governors, lawmakers, think tanks and social groups -- had been eroding support. One, conservative activist Mike Davis, appeared on the show of right-wing podcaster Steve Bannon, urging viewers to call their senators to reject this "AI amnesty" for "trillion-dollar Big Tech monopolists." He said he also texted with Trump directly, advising the president to stay neutral on the issue despite what Davis characterized as significant pressure from White House AI czar David Sacks, Commerce Secretary Howard Lutnick, Texas Sen. Ted Cruz and others. Conservatives passionate about getting rid of the provision had spent weeks fighting others in the party who favored the legislative moratorium because they saw it as essential for the country to compete against China in the race for AI dominance. The schism marked the latest and perhaps most noticeable split within the GOP about whether to let states continue to put guardrails on emerging technologies or minimize such interference. In the end, the advocates for guardrails won, revealing the enormous influence of a segment of the Republican Party that has come to distrust Big Tech. They believe states must remain free to protect their citizens against potential harms of the industry, whether from AI, social media or emerging technologies. "Tension in the conservative movement is palpable," said Adam Thierer of the R Street Institute, a conservative-leaning think tank. Thierer first proposed the idea of the AI moratorium last year. He noted "the animus surrounding Big Tech" among many Republicans. "That was the differentiating factor." Conservative v. conservative in a last-minute fight The Heritage Foundation, children's safety groups and Republican state lawmakers, governors and attorneys general all weighed in against the AI moratorium. Democrats, tech watchdogs and some tech companies opposed it, too. Sensing the moment was right on Monday night, Republican Sen. Marsha Blackburn of Tennessee, who opposed the AI provision and had attempted to water it down, teamed up with Democratic Sen. Maria Cantwell of Washington to suggest striking the entire proposal. By morning, the provision was removed in a 99-1 vote. The whirlwind demise of a provision that initially had the backing of House and Senate leadership and the White House disappointed other conservatives who felt it gave China, a main AI competitor, an advantage. Ryan Fournier, chairman of Students for Trump and chief marketing officer of the startup Uncensored AI, had supported the moratorium, writing on X that it "stops blue states like California and New York from handing our future to Communist China." "Republicans are that way ... I get it," he said in an interview, but added there needs to be "one set of rules, not 50" for AI innovation to be successful. AI advocates fear a patchwork of state rules Tech companies, tech trade groups, venture capitalists and multiple Trump administration figures had voiced their support for the provision that would have blocked states from passing their own AI regulations for years. 
They argued that in the absence of federal standards, letting the states take the lead would leave tech innovators mired in a confusing patchwork of rules. Lutnick, the commerce secretary, posted that the provision "makes sure American companies can develop cutting-edge tech for our military, infrastructure, and critical industries -- without interference from anti-innovation politicians." AI czar Sacks had also publicly supported the measure. After the Senate passed the bill without the AI provision, the White House responded to an inquiry for Sacks with the president's position, saying Trump "is fully supportive of the Senate-passed version of the One, Big, Beautiful Bill." Acknowledging defeat of his provision on the Senate floor, Cruz noted how pleased China, liberal politicians and "radical left-wing groups" would be to hear the news. But Blackburn pointed out that the federal government has failed to pass laws that address major concerns about AI, such as keeping children safe and securing copyright protections. "But you know who has passed it?" she said. "The states." Conservatives want to win the AI race, but disagree on how Conservatives distrusting Big Tech for what they see as social media companies stifling speech during the COVID-19 pandemic and surrounding elections said that tech companies shouldn't get a free pass, especially on something that carries as much risk as AI. Many who opposed the moratorium also brought up preserving states' rights, though proponents countered that AI issues transcend state borders and Congress has the power to regulate interstate commerce. Eric Lucero, a Republican state lawmaker in Minnesota, noted that many other industries already navigate different regulations established by both state and local jurisdictions. "I think everyone in the conservative movement agrees we need to beat China," said Daniel Cochrane from the Heritage Foundation. "I just think we have different prescriptions for doing so." Many argued that in the absence of federal legislation, states were best positioned to protect citizens from the potential harms of AI technology. "We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous," Rep. Marjorie Taylor Greene wrote on X. A call for federal rules Another Republican, Texas state Sen. Angela Paxton, wrote to Cruz and his counterpart, Sen. John Cornyn, urging them to remove the moratorium. She and other conservatives said some sort of federal standard could help clarify the landscape around AI and resolve some of the party's disagreements. But with the moratorium dead and Republicans holding only narrow majorities in both chambers of Congress, it's unclear whether they will be able to agree on a set of standards to guide the development of the burgeoning technology. In an email to The Associated Press, Paxton said she wants to see limited federal AI legislation "that sets some clear guardrails" around national security and interstate commerce, while leaving states free to address issues that affect their residents. "When it comes to technology as powerful and potentially dangerous as AI, we should be cautious about silencing state-level efforts to protect consumers and children," she said. ___ Associated Press writer Matt Brown in Washington contributed to this report.
[41]
How a GOP rift over tech regulation doomed a ban on state AI laws in Trump's tax bill
NEW YORK -- A controversial bid to deter states from regulating artificial intelligence for a decade seemed on its way to passing as the Republican tax cut and spending bill championed by President Donald Trump worked its way through the U.S. Senate. But as the bill neared a final vote, a relentless campaign against it by a constellation of conservatives -- including Republican governors, lawmakers, think tanks and social groups -- had been eroding support. One, conservative activist Mike Davis, appeared on the show of right-wing podcaster Steve Bannon, urging viewers to call their senators to reject this "AI amnesty" for "trillion-dollar Big Tech monopolists." He said he also texted with Trump directly, advising the president to stay neutral on the issue despite what Davis characterized as significant pressure from White House AI czar David Sacks, Commerce Secretary Howard Lutnick, Texas Sen. Ted Cruz and others. Conservatives passionate about getting rid of the provision had spent weeks fighting others in the party who favored the legislative moratorium because they saw it as essential for the country to compete against China in the race for AI dominance. The schism marked the latest and perhaps most noticeable split within the GOP about whether to let states continue to put guardrails on emerging technologies or minimize such interference. In the end, the advocates for guardrails won, revealing the enormous influence of a segment of the Republican Party that has come to distrust Big Tech. They believe states must remain free to protect their citizens against potential harms of the industry, whether from AI, social media or emerging technologies. "Tension in the conservative movement is palpable," said Adam Thierer of the R Street Institute, a conservative-leaning think tank. Thierer first proposed the idea of the AI moratorium last year. He noted "the animus surrounding Big Tech" among many Republicans. "That was the differentiating factor." The Heritage Foundation, children's safety groups and Republican state lawmakers, governors and attorneys general all weighed in against the AI moratorium. Democrats, tech watchdogs and some tech companies opposed it, too. Sensing the moment was right on Monday night, Republican Sen. Marsha Blackburn of Tennessee, who opposed the AI provision and had attempted to water it down, teamed up with Democratic Sen. Maria Cantwell of Washington to suggest striking the entire proposal. By morning, the provision was removed in a 99-1 vote. The whirlwind demise of a provision that initially had the backing of House and Senate leadership and the White House disappointed other conservatives who felt it gave China, a main AI competitor, an advantage. Ryan Fournier, chairman of Students for Trump and chief marketing officer of the startup Uncensored AI, had supported the moratorium, writing on X that it "stops blue states like California and New York from handing our future to Communist China." "Republicans are that way ... I get it," he said in an interview, but added there needs to be "one set of rules, not 50" for AI innovation to be successful. Tech companies, tech trade groups, venture capitalists and multiple Trump administration figures had voiced their support for the provision that would have blocked states from passing their own AI regulations for years. They argued that in the absence of federal standards, letting the states take the lead would leave tech innovators mired in a confusing patchwork of rules. 
Lutnick, the commerce secretary, posted that the provision "makes sure American companies can develop cutting-edge tech for our military, infrastructure, and critical industries -- without interference from anti-innovation politicians." AI czar Sacks had also publicly supported the measure. After the Senate passed the bill without the AI provision, the White House responded to an inquiry for Sacks with the president's position, saying Trump "is fully supportive of the Senate-passed version of the One, Big, Beautiful Bill." Acknowledging defeat of his provision on the Senate floor, Cruz noted how pleased China, liberal politicians and "radical left-wing groups" would be to hear the news. But Blackburn pointed out that the federal government has failed to pass laws that address major concerns about AI, such as keeping children safe and securing copyright protections. "But you know who has passed it?" she said. "The states." Conservatives distrusting Big Tech for what they see as social media companies stifling speech during the COVID-19 pandemic and surrounding elections said that tech companies shouldn't get a free pass, especially on something that carries as much risk as AI. Many who opposed the moratorium also brought up preserving states' rights, though proponents countered that AI issues transcend state borders and Congress has the power to regulate interstate commerce. Eric Lucero, a Republican state lawmaker in Minnesota, noted that many other industries already navigate different regulations established by both state and local jurisdictions. "I think everyone in the conservative movement agrees we need to beat China," said Daniel Cochrane from the Heritage Foundation. "I just think we have different prescriptions for doing so." Many argued that in the absence of federal legislation, states were best positioned to protect citizens from the potential harms of AI technology. "We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous," Rep. Marjorie Taylor Greene wrote on X. Another Republican, Texas state Sen. Angela Paxton, wrote to Cruz and his counterpart, Sen. John Cornyn, urging them to remove the moratorium. She and other conservatives said some sort of federal standard could help clarify the landscape around AI and resolve some of the party's disagreements. But with the moratorium dead and Republicans holding only narrow majorities in both chambers of Congress, it's unclear whether they will be able to agree on a set of standards to guide the development of the burgeoning technology. In an email to The Associated Press, Paxton said she wants to see limited federal AI legislation "that sets some clear guardrails" around national security and interstate commerce, while leaving states free to address issues that affect their residents. "When it comes to technology as powerful and potentially dangerous as AI, we should be cautious about silencing state-level efforts to protect consumers and children," she said. ___ Associated Press writer Matt Brown in Washington contributed to this report.
[42]
US Senate Debates Whether to Adopt Revised State AI Regulation Ban
WASHINGTON (Reuters) - Two key U.S. Republican senators agreed to shorten a proposed federal moratorium on state regulation of artificial intelligence to five years and to allow states to adopt rules on child online safety and the protection of artists' images and likenesses. Senate Commerce Committee chair Ted Cruz originally proposed securing compliance by blocking states that regulate AI from a $42 billion broadband infrastructure fund as part of a broad tax and budget bill. A revised version released last week would only restrict states that regulate AI from tapping a new $500 million fund to support AI infrastructure. Under a compromise announced Sunday by Senator Marsha Blackburn, a critic of the state AI regulatory moratorium, the proposed 10-year moratorium would be cut to five years and would allow states to regulate issues like protecting artists' voices or child online safety if they do not impose an "undue or disproportionate burden" on AI. Tennessee passed a law last year dubbed the ELVIS Act to protect songwriters and performers from the use of AI to make unauthorized fake works in the image and voice of well-known artists. Texas approved legislation to bar AI use for the creation of child pornography or to encourage a person to commit physical self-harm or commit a crime. It is not clear if the change will be enough to assuage concerns. On Friday, 17 Republican governors urged the Senate to drop the AI plan. "We cannot support a provision that takes away states' powers to protect our citizens. Let states function as the laboratories of democracy they were intended to be and allow state leaders to protect our people," said the governors led by Arkansas' Sarah Huckabee Sanders. U.S. Commerce Secretary Howard Lutnick voiced his support for the revised measure, calling it a pragmatic compromise. "Congress should stand by the Cruz provision to keep America First in AI," Lutnick wrote on X. Congress has failed for years to pass any meaningful AI regulations or safety measures. Senator Maria Cantwell, the top Democrat on the Commerce Committee, said the Blackburn-Cruz amendment "does nothing to protect kids or consumers. It's just another giveaway to tech companies." Cantwell said Lutnick could simply opt to strip states of internet funding if they did not agree to the moratorium. (Reporting by David Shepardson; Editing by Chizu Nomiyama)
[43]
GOP senators reach deal on AI regulation ban
Sen. Marsha Blackburn (R-Tenn.) said Sunday that she reached a deal with Senate Commerce Chair Ted Cruz (R-Texas) on new text for a provision in President Trump's sweeping tax package that seeks to bar states from regulating artificial intelligence (AI). The updated text would enact a "temporary pause," banning states from regulating AI for five years if they want access to $500 million in AI infrastructure and deployment funding included in the bill. The original provision, which Blackburn opposed, sought to limit state legislation for a 10-year period. The updated text also includes new exemptions for state laws seeking to regulate unfair or deceptive practices, children's online safety, child sexual abuse material and publicity rights. "For decades, Congress has proven incapable of passing legislation to govern the virtual space and protect Americans from being exploited by Big Tech, and it's why I continue to fight to pass federal protections for Tennesseans and Americans alike," Blackburn said in a statement. "To ensure we do not decimate the progress states like Tennessee have made to stand in the gap, I am pleased Chairman Cruz has agreed to update the AI provision to exempt state laws that protect kids, creators, and other vulnerable individuals from the unintended consequences of AI," she continued. Blackburn has been a key proponent of legislation seeking to protect kids online. She reintroduced the Kids Online Safety Act (KOSA) last month alongside Sen. Richard Blumenthal (D-Conn.), Senate Majority Leader John Thune (R-S.D.) and Senate Minority Leader Chuck Schumer (D-N.Y.). "I look forward to working with him in the coming months to hold Big Tech accountable -- including by passing the Kids Online Safety Act and an online privacy framework that gives consumers more power over their data," she added. "It's time to get the One Big Beautiful Bill Act to the President's desk so we can deliver on our promise of enacting the America First agenda." It's unclear whether Blackburn and Cruz's deal on the AI provision will resolve the concerns of other lawmakers who have previously voiced opposition to the measure, including Sens. Ron Johnson (R-Wis.) and Josh Hawley (R-Mo.) and Rep. Marjorie Taylor Greene (R-Ga.). The provision survived scrutiny last week from Senate Parliamentarian Elizabeth MacDonough, who ruled that the AI moratorium did not violate the Byrd rule and can remain in the reconciliation bill. The Senate is expected to move forward with a series of votes on the package Monday morning, as lawmakers race to get the bill across the finish line before Trump's self-imposed deadline of July 4.
[44]
US Senate Strikes AI Regulation Ban From Trump Megabill
WASHINGTON (Reuters) -The Republican-led U.S. Senate voted overwhelmingly on Tuesday to remove a 10-year federal moratorium on state regulation of artificial intelligence from President Trump's sweeping tax-cut and spending bill. Lawmakers voted 99-1 to strike the ban from the bill by adopting an amendment offered by Republican Senator Marsha Blackburn. The action came during a marathon session known as a "vote-a-rama," in which lawmakers offered numerous amendments to the legislation that Republicans eventually hope to pass. Republican Senator Thom Tillis was the lone lawmaker who voted to retain the ban. The Senate version of Trump's legislation would have only restricted states regulating AI from tapping a new $500 million fund to support AI infrastructure. Major AI companies, including Alphabet's Google and OpenAI, have expressed support for Congress taking AI regulation out of the hands of states to free innovation from a panoply of differing requirements. Blackburn presented her amendment to strike the provision a day after agreeing to compromise language with Senate Commerce Committee chair Ted Cruz that would have cut the ban to five years and allowed states to regulate issues such as protecting artists' voices or child online safety if they did not impose an "undue or disproportionate burden" on AI. But Blackburn withdrew her support for the compromise before the amendment vote. "The current language is not acceptable to those who need these protections the most," the Tennessee Republican said in a statement. "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens." (Reporting by David Morgan, Editing by William Maclean and Alex Richardson)
[45]
Senate Strikes AI Provision From GOP Bill After Uproar From the States
WASHINGTON (AP) -- A proposal to deter states from regulating artificial intelligence for a decade was soundly defeated in the U.S. Senate on Tuesday, thwarting attempts to insert the measure into President Donald Trump's big bill of tax breaks and spending cuts. The Senate voted 99-1 to strike the AI provision from the legislation after weeks of criticism from both Republican and Democratic governors and state officials. The measure was originally proposed as a 10-year ban on states doing anything to regulate AI; lawmakers later tied it to federal funding so that only states that backed off on AI regulations would be able to get subsidies for broadband internet or AI infrastructure. A last-ditch Republican effort to save the provision would have reduced the time frame to five years and sought to exempt some favored AI laws, such as those protecting children or country music performers from harmful AI tools. But that effort was abandoned when Sen. Marsha Blackburn, a Tennessee Republican, teamed up with Democratic Sen. Maria Cantwell of Washington on Monday night to introduce an amendment to strike the entire proposal. Voting on the amendment happened after 4 a.m. Tuesday as part of an overnight session as Republican leaders sought to secure support for the tax cut bill while fending off other proposed amendments, mostly from Democrats trying to defeat the package. Proponents of an AI moratorium had argued that a patchwork of state and local AI laws is hindering progress in the AI industry and the ability of U.S. firms to compete with China. Some prominent tech leaders welcomed the idea after Republican Sen. Ted Cruz of Texas, who leads the Senate Commerce committee, floated it at a hearing in May. But state and local lawmakers and AI safety advocates argued that the rule is a gift to an industry that wants to avoid accountability for its products. Led by Arkansas Gov. Sarah Huckabee Sanders, a majority of GOP governors sent a letter to Congress opposing it. Also appealing to lawmakers to strike the provision was a group of parents of children who have died as a result of online harms. Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
[46]
Blackburn says AI deal with Cruz is off
Sen. Marsha Blackburn (R-Tenn.) said Monday that a deal to update language of a provision in President Trump's tax package seeking to bar states from regulating artificial intelligence (AI) is off. Just one day earlier, Blackburn announced she had reached an agreement with Senate Commerce Chair Ted Cruz (R-Texas) on new text that would bar states from regulating AI for five years and featured exemptions for laws on child online safety and publicity rights. However, she pulled support for the updated provision Monday evening. "While I appreciate Chairman Cruz's efforts to find acceptable language that allows states to protect their citizens from the abuses of AI, the current language is not acceptable to those who need these protections the most," Blackburn said in a statement. "This provision could allow Big Tech to continue to exploit kids, creators, and conservatives," she continued. "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens." Blackburn has been a key proponent of the Kids Online Safety Act (KOSA), which she reintroduced last month alongside Sen. Richard Blumenthal (D-Conn.) and Senate leadership. "For as long as I've been in Congress, I've worked alongside federal and state legislators, parents seeking to protect their kids online, and the creative community in Tennessee to fight back against Big Tech's exploitation by passing legislation to govern the virtual space," she added. The Tennessee Republican now plans to co-sponsor Sen. Maria Cantwell's (D-Wash.) amendment to strip the AI provision from the reconciliation bill, in addition to filing her own amendment, according to Cantwell's office. Cantwell, in turn, also plans to co-sponsor Blackburn's amendment, alongside Sens. Susan Collins (R-Maine) and Ed Markey (D-Mass.). Earlier in the day, she had slammed the deal between Blackburn and Cruz, arguing it did "nothing to protect kids or consumers." "It's just another giveaway to tech companies," Cantwell, who serves as the top Democrat on the Senate Commerce Committee, said in a statement. "This provision gives AI and social media a brand-new shield against litigation and state regulation. This is Section 230 on steroids." "And when [Commerce Secretary] Howard Lutnick has the authority to force states to take this deal or lose all of their BEAD funding, consumers will find out just how catastrophic this deal is," she added. The provision is tied to $500 million in AI infrastructure and deployment funding under the Broadband Equity, Access and Deployment (BEAD) program. Under the updated language, if states want access to the funds, they cannot regulate AI for five years. The measure previously sought to bar state regulation for a 10-year period. Blackburn's last-minute shift on the provision comes at a key moment, as the Senate has been voting for hours on amendments to Trump's "big, beautiful bill," which Republicans are hoping to get across the finish line before the July 4 holiday.
[47]
Greene touts removal of AI provision from GOP megabill as 'huge win for federalism'
Rep. Marjorie Taylor Greene (R-Ga.) cheered the Senate's decision Tuesday to remove a provision from Republicans' sweeping tax bill that would have barred states from regulating artificial intelligence (AI), calling it a "huge win for federalism." Despite initially voting for the House version of the bill containing the AI moratorium, Greene has since become a vocal opponent of the measure, arguing it violates states' rights. "I told the White House I couldn't support the One Big Beautiful Bill with the AI moratorium inside," she wrote in an X post Tuesday, shortly before the Senate sent the tax package back to the House. "Banning states from regulating AI for 10 years is a gift to Big Tech and a disaster for American workers and states' rights," Greene continued. "Thanks to Senator @MarshaBlackburn, we got it OUT. That's a huge win for federalism and the America First agenda." She offered her thanks to Sen. Marsha Blackburn (R-Tenn.) for spearheading the amendment that ultimately removed the measure early Tuesday. "I just want to thank Sen. Marsha Blackburn," Greene told former White House aide Steve Bannon in a clip shared alongside her comments. "She did an incredible job. She has not had any sleep yet. She stayed up all night long fighting to get the AI moratorium out, and I'm really grateful for her fight." "I had told the White House I couldn't for it," she said, adding, "There's no way I can destroy states' rights, and there's no way I can let AI have free rein and the potential destruction that it could have for 10 years without states being able to protect themselves and the people that live there and their jobs and their children." The removal of the AI provision marked a sharp reversal, as the measure appeared poised to sail forward after Blackburn struck a deal with Senate Commerce Chair Ted Cruz (R-Texas) on new language late Sunday. The updated provision barred states from regulating AI for five years, down from 10 years, and featured exemptions for child online safety and publicity rights. However, Blackburn pulled her support Monday evening, saying the new language was still "not acceptable." She instead offered up an amendment to strip the AI moratorium from the GOP's reconciliation bill, which passed 99-1 early Tuesday. Cruz lamented on the Senate floor that their agreement was "set to pass" and Trump was on board, but "outside interests opposed the deal." He got behind Blackburn's amendment, acknowledging that "many of my colleagues would prefer not to vote on this matter."
[48]
Senate strips AI provision from megabill
The Senate voted early Tuesday morning to strip a provision barring states from regulating artificial intelligence (AI) from Republicans' megabill. The amendment, sponsored by Sen. Marsha Blackburn (R-Tenn.), was adopted 99-1, with only Sen. Thom Tillis (R-N.C.) voting against it. The removal of the provision marks a sharp turn of events for the AI moratorium. Blackburn announced Sunday she had reached a deal with Senate Commerce Chair Ted Cruz (R-Texas) on new language that would bar states from regulating AI for five years and featured exemptions for child online safety and publicity rights. However, she pulled support for the updated provision late Monday and instead offered up the amendment to strike the measure from President Trump's sweeping tax bill. Cruz ultimately got behind Blackburn's amendment early Tuesday, acknowledging that "many of my colleagues would prefer not to vote on this matter." "A few hours ago, we had an agreement that Blackburn-Cruz was set to pass," he said on the Senate floor of his earlier deal with the Tennessee Republican. "When I spoke to President Trump last year, last night, he said it was a terrific agreement," Cruz added. "The agreement protected kids and protected the rights of creative artists, but outside interests opposed that deal." Blackburn underscored her concerns with the language of the updated provision on the Senate floor. "I regret that we weren't able to come to a compromise that would protect our governors, our state legislators, our attorney generals and, of course, House members who have expressed concern over this language," she said. "I do want to thank Senator Cruz for the work and the time that he put in trying to find a resolution to this issue. I do appreciate that," Blackburn continued. "But what we know is this -- this body has proven that they cannot legislate on emerging technology." She noted Congress' inability to pass legislation on online privacy, AI and other tech issues. Blackburn has been a key proponent of the Kids Online Safety Act, which she reintroduced last month. The bill passed the Senate last year but failed to move forward in the House. "You know who has passed it?" she added. "It is our states. They're the ones that are protecting children in the virtual space. They're the ones that are out here protecting our entertainers' name, image, likeness, of broadcasters, podcasters, authors. And it is appropriate that we approach this issue with the seriousness that it deserves." The updated version of the provision would have barred states from regulating AI for five years if they wanted access to $500 million in AI infrastructure and deployment funding. It cut in half the timeline of the original provision, which sought to bar state AI regulation for 10 years.
[49]
How a GOP Rift Over Tech Regulation Doomed a Ban on State AI Laws in Trump's Tax Bill
NEW YORK (AP) -- A controversial bid to deter states from regulating artificial intelligence for a decade seemed on its way to passing as the Republican tax cut and spending bill championed by President Donald Trump worked its way through the U.S. Senate. But as the bill neared a final vote, a relentless campaign against it by a constellation of conservatives -- including Republican governors, lawmakers, think tanks and social groups -- had been eroding support. One, conservative activist Mike Davis, appeared on the show of right-wing podcaster Steve Bannon, urging viewers to call their senators to reject this "AI amnesty" for "trillion-dollar Big Tech monopolists." He said he also texted with Trump directly, advising the president to stay neutral on the issue despite what Davis characterized as significant pressure from White House AI czar David Sacks, Commerce Secretary Howard Lutnick, Texas Sen. Ted Cruz and others. Conservatives passionate about getting rid of the provision had spent weeks fighting others in the party who favored the legislative moratorium because they saw it as essential for the country to compete against China in the race for AI dominance. The schism marked the latest and perhaps most noticeable split within the GOP about whether to let states continue to put guardrails on emerging technologies or minimize such interference. In the end, the advocates for guardrails won, revealing the enormous influence of a segment of the Republican Party that has come to distrust Big Tech. They believe states must remain free to protect their citizens against potential harms of the industry, whether from AI, social media or emerging technologies. "Tension in the conservative movement is palpable," said Adam Thierer of the R Street Institute, a conservative-leaning think tank. Thierer first proposed the idea of the AI moratorium last year. He noted "the animus surrounding Big Tech" among many Republicans. "That was the differentiating factor." Conservative v. conservative in a last-minute fight The Heritage Foundation, children's safety groups and Republican state lawmakers, governors and attorneys general all weighed in against the AI moratorium. Democrats, tech watchdogs and some tech companies opposed it, too. Sensing the moment was right on Monday night, Republican Sen. Marsha Blackburn of Tennessee, who opposed the AI provision and had attempted to water it down, teamed up with Democratic Sen. Maria Cantwell of Washington to suggest striking the entire proposal. By morning, the provision was removed in a 99-1 vote. The whirlwind demise of a provision that initially had the backing of House and Senate leadership and the White House disappointed other conservatives who felt it gave China, a main AI competitor, an advantage. Ryan Fournier, chairman of Students for Trump and chief marketing officer of the startup Uncensored AI, had supported the moratorium, writing on X that it "stops blue states like California and New York from handing our future to Communist China." "Republicans are that way ... I get it," he said in an interview, but added there needs to be "one set of rules, not 50" for AI innovation to be successful. AI advocates fear a patchwork of state rules Tech companies, tech trade groups, venture capitalists and multiple Trump administration figures had voiced their support for the provision that would have blocked states from passing their own AI regulations for years. 
They argued that in the absence of federal standards, letting the states take the lead would leave tech innovators mired in a confusing patchwork of rules. Lutnick, the commerce secretary, posted that the provision "makes sure American companies can develop cutting-edge tech for our military, infrastructure, and critical industries -- without interference from anti-innovation politicians." AI czar Sacks had also publicly supported the measure. After the Senate passed the bill without the AI provision, the White House responded to an inquiry for Sacks with the president's position, saying Trump "is fully supportive of the Senate-passed version of the One, Big, Beautiful Bill." Acknowledging defeat of his provision on the Senate floor, Cruz noted how pleased China, liberal politicians and "radical left-wing groups" would be to hear the news. But Blackburn pointed out that the federal government has failed to pass laws that address major concerns about AI, such as keeping children safe and securing copyright protections. "But you know who has passed it?" she said. "The states." Conservatives want to win the AI race, but disagree on how Conservatives distrusting Big Tech for what they see as social media companies stifling speech during the COVID-19 pandemic and surrounding elections said that tech companies shouldn't get a free pass, especially on something that carries as much risk as AI. Many who opposed the moratorium also brought up preserving states' rights, though proponents countered that AI issues transcend state borders and Congress has the power to regulate interstate commerce. Eric Lucero, a Republican state lawmaker in Minnesota, noted that many other industries already navigate different regulations established by both state and local jurisdictions. "I think everyone in the conservative movement agrees we need to beat China," said Daniel Cochrane from the Heritage Foundation. "I just think we have different prescriptions for doing so." Many argued that in the absence of federal legislation, states were best positioned to protect citizens from the potential harms of AI technology. "We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous," Rep. Marjorie Taylor Greene wrote on X. A call for federal rules Another Republican, Texas state Sen. Angela Paxton, wrote to Cruz and his counterpart, Sen. John Cornyn, urging them to remove the moratorium. She and other conservatives said some sort of federal standard could help clarify the landscape around AI and resolve some of the party's disagreements. But with the moratorium dead and Republicans holding only narrow majorities in both chambers of Congress, it's unclear whether they will be able to agree on a set of standards to guide the development of the burgeoning technology. In an email to The Associated Press, Paxton said she wants to see limited federal AI legislation "that sets some clear guardrails" around national security and interstate commerce, while leaving states free to address issues that affect their residents. "When it comes to technology as powerful and potentially dangerous as AI, we should be cautious about silencing state-level efforts to protect consumers and children," she said. ___ Associated Press writer Matt Brown in Washington contributed to this report. Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
[50]
US Senate debates whether to adopt revised state AI regulation ban - The Economic Times
A revised US Senate proposal reduces the federal moratorium on state AI regulation from 10 to five years, allowing states to address child safety and artists' rights. The compromise, backed by key Republicans, faces opposition from governors and Democrats who argue it weakens protections and favors tech firms over the public interest.

Two key US Republican senators agreed to cut a proposed federal moratorium on state regulation of artificial intelligence to five years and to allow states to adopt rules on child online safety and protecting artists' images or likenesses. Senate Commerce Committee chair Ted Cruz originally proposed securing compliance by blocking states that regulate AI from a $42 billion broadband infrastructure fund as part of a broad tax and budget bill. A revised version released last week would only restrict states regulating AI from tapping a new $500 million fund to support AI infrastructure. Under a compromise announced Sunday by Senator Marsha Blackburn, a critic of the state AI regulatory moratorium, the proposed 10-year moratorium would be cut to five years and allow states to regulate issues like protecting artists' voices or child online safety if they do not impose an "undue or disproportionate burden" on AI. Tennessee passed a law last year dubbed the ELVIS Act to protect songwriters and performers from the use of AI to make unauthorized fake works in the image and voice of well-known artists. Texas approved legislation to bar AI use for the creation of child pornography or to encourage a person to commit physical self-harm or a crime. It is not clear if the change will be enough to assuage concerns. On Friday, 17 Republican governors urged the Senate to drop the AI plan. "We cannot support a provision that takes away states' powers to protect our citizens. Let states function as the laboratories of democracy they were intended to be and allow state leaders to protect our people," said the governors, led by Arkansas' Sarah Huckabee Sanders. US Commerce Secretary Howard Lutnick voiced his support for the revised measure, calling it a pragmatic compromise. "Congress should stand by the Cruz provision to keep America First in AI," Lutnick wrote on X. Congress has failed for years to pass any meaningful AI regulations or safety measures. Senator Maria Cantwell, the top Democrat on the Commerce Committee, said the Blackburn-Cruz amendment "does nothing to protect kids or consumers. It's just another giveaway to tech companies." Cantwell said Lutnick could simply opt to strip states of internet funding if they did not agree to the moratorium.
[51]
US Senate strikes AI regulation ban from Trump megabill - The Economic Times
The US Senate voted 99-1 to remove a 10-year federal ban on state AI regulation from Trump's tax-and-spending bill. Senator Blackburn led the amendment, arguing for state-level protections. Major tech firms prefer federal control to avoid regulatory patchworks that could hinder innovation.

The Republican-led US Senate voted overwhelmingly on Tuesday to remove a 10-year federal moratorium on state regulation of artificial intelligence from President Trump's sweeping tax-cut and spending bill. Lawmakers voted 99-1 to strike the ban from the bill by adopting an amendment offered by Republican Senator Marsha Blackburn. The action came during a marathon session known as a "vote-a-rama," in which lawmakers offered numerous amendments to the legislation that Republicans eventually hope to pass. Republican Senator Thom Tillis was the lone lawmaker who voted to retain the ban. The Senate version of Trump's legislation would have only restricted states regulating AI from tapping a new $500 million fund to support AI infrastructure. Major AI companies, including Alphabet's Google and OpenAI, have expressed support for Congress taking AI regulation out of the hands of states to free innovation from a panoply of differing requirements. Blackburn presented her amendment to strike the provision a day after agreeing to compromise language with Senate Commerce Committee chair Ted Cruz that would have cut the ban to five years and allowed states to regulate issues such as protecting artists' voices or child online safety if they did not impose an "undue or disproportionate burden" on AI. But Blackburn withdrew her support for the compromise before the amendment vote. "The current language is not acceptable to those who need these protections the most," the Tennessee Republican said in a statement. "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens."
[52]
Defeat of a ten year ban on state AI laws is a blow to tech industry - The Economic Times
The Senate voted 99-1 to reject a proposed 10-year ban on state AI regulations, marking a major setback for the tech industry. Consumer groups and Democrats hailed the move, which preserves states' authority to oversee AI. The amendment faced strong opposition despite support from Republicans, tech firms, and the Trump administration.

The defeat early Tuesday of a ban on state laws for artificial intelligence dealt a major blow to the tech industry, which had been on the verge of a policy victory. In a 99-1 vote, the Senate voted overwhelmingly to strike an amendment to the Republican economic policy package that would have imposed a decadelong moratorium on attempts to regulate AI by the states. The before-sunrise vote was a win for consumer groups and Democrats, who had argued for weeks against the provision that they feared would remove any threat of oversight for the powerful AI industry. "The Senate came together tonight to say that we can't just run over good state consumer protection laws," Sen. Maria Cantwell, D-Wash., said in a statement. "States can fight robocalls, deepfakes and provide safe autonomous vehicle laws." There are no federal laws regulating AI, but states have enacted dozens of laws that strengthen consumer privacy, ban AI-generated child sexual abuse material and outlaw deepfake videos of political candidates. All but a handful of states have some laws regulating artificial intelligence in place. It is an area of deep interest: All 50 have introduced bills in the past year tied to the issue. The provision, introduced in the Senate by Sen. Ted Cruz, R-Texas, sparked intense criticism from state attorneys general, child safety groups and consumer advocates who warned the amendment would give AI companies a clear runway to develop unproven and potentially dangerous technologies. The proposed ban on state AI laws stemmed from a proposal championed by Speaker Mike Johnson, R-La. On May 22, the House-approved version of the bill included the "Artificial Intelligence and Information Technology Modernization Initiative," a 10-year moratorium on state AI laws. Silicon Valley venture capital powerhouse Andreessen Horowitz and AI startups OpenAI and Anduril, a defense tech company, lobbied fiercely in favor of the amendment. They said it was too difficult for startups to comply with dozens of different state AI laws. The Trump administration also threw its support behind the proposal. Commerce Secretary Howard Lutnick called the moratorium a critical policy to advance American leadership in AI. "If we're serious about winning the AI race, we must prioritize investment and innovation," Lutnick posted on social media Monday. On Sunday, it appeared more likely the AI amendment might go through after Sen. Marsha Blackburn, R-Tenn., reached a compromise with Cruz on a shorter moratorium of five years. But the compromise included language that many legal experts said could neuter existing state laws. Blackburn withdrew her amendment written with Cruz late Monday and introduced a motion to strike his original amendment. "The Senate did the right thing today for kids, for families and for our future by voting to strip out the dangerous 10-year ban on state AI laws, which had no business being in a budget bill in the first place," Jim Steyer, CEO of the child safety group Common Sense Media, said in a statement.
[53]
Senate Shoots Down 10-Year Ban on State AI Regulations | PYMNTS.com
As the Financial Times (FT) reported, the senators voted 99-1 in the early hours of Tuesday (July 1) for an amendment that removed wording about the ban from President Donald Trump's tax/spending bill. The vote, as the FT notes, is a defeat for Big Tech companies, who said the ban would help prevent inconsistent, state-by-state rules that could hinder innovation. "We want to be the leaders in AI and quantum and all these new technologies," Senate Majority Leader John Thune (R-S.D.) said last week. "And the way to do that is not to come in with a heavy hand of government; it's to come in with a light touch." But the proposal had been criticized by other Republicans, who flagged concerns about keeping states from regulating a powerful and potentially disruptive technology. "I think it's terrible policy. It's a huge giveaway to some of the worst corporate actors out there," said Sen. Josh Hawley (R-Mo.), an opponent of the ban. Hawley's concerns echo those of a group of civil society groups, academic institutions, artists and technology workers who wrote to the U.S. House to campaign against the ban. "This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm -- regardless of how intentional or egregious the misconduct or how devastating the consequences -- the company making that bad tech would be unaccountable to lawmakers and the public," the group said in its letter. In other AI news, PYMNTS wrote earlier this week about growing skepticism about whether agentic AI -- which can autonomously complete tasks and take actions outside of human involvement -- can generate valid outcomes and be used ethically. A forthcoming report from PYMNTS Intelligence shows that while almost all chief financial officers (CFOs) at enterprise-level companies are familiar with agentic AI, only 15% are even considering putting it to work. Companies are evaluating and conducting trials, not embracing wholesale adoption. Agentic AI may be everywhere as a topic, but it's hardly a fixture of the business world. "What's clear in the data is that the companies exploring agentic AI have already gone deep with their use of generative AI, a less-but-still-advanced technology like ChatGPT that is used to create content, including reports, field customer service queries, code software and analyze data," PYMNTS wrote.
[54]
Republican senators cut deal to limit state AI rules in 'big,...
Two key GOP senators reached a deal over the weekend meant to pacify a growing House Republican revolt over the extent to which states can regulate artificial intelligence in the coming years. Under the agreement between Sen. Marsha Blackburn (R-Tenn.) and Senate Commerce Committee Chairman Ted Cruz (R-Texas), states that wish to access $500 million in AI infrastructure funding in President Trump's One Big Beautiful Bill Act must hold off on new rules governing the technology for five years. The deal includes carveouts allowing states to regulate child sexual abuse material, unauthorized use of a person's likeness and other deceptive practices. The original Senate version of the Big Beautiful Bill imposed a 10-year moratorium, which Blackburn opposed and which some House Republicans had suggested would cause them to kill the entire legislation. "For decades, Congress has proven incapable of passing legislation to govern the virtual space and protect Americans from being exploited by Big Tech, and it's why I continue to fight to pass federal protections for Tennesseans and Americans alike," Blackburn said in a statement. "To ensure we do not decimate the progress states like Tennessee have made to stand in the gap, I am pleased Chairman Cruz has agreed to update the AI provision to exempt state laws that protect kids, creators, and other vulnerable individuals from the unintended consequences of AI." Blackburn, who has publicly expressed interest in a run for governor of her home state next year, is far from the only Republican to have expressed concerns about the 10-year moratorium on state AI regulation. Last week, 17 Republican governors inked a letter to Senate Majority Leader John Thune (R-S.D.) and House Speaker Mike Johnson (R-La.) urging them to scrap the pause completely. Multiple House Republicans, most notably far-right Rep. Marjorie Taylor Greene (R-Ga.), claimed that the AI moratorium was a dealbreaker and demanded the Senate cut it out of the megabill. "Full transparency, I did not know about this," Greene admitted on X a few days after she voted for the One Big Beautiful Bill Act. "I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there." A spokesperson for Greene did not immediately respond to a request for comment about whether the Blackburn-Cruz deal allays her concerns. Big Tech had pushed to restrict states from setting up their own rules on AI, backed by the Trump White House's crypto and AI czar, entrepreneur David Sacks. "I'm happy to see the senators worked out a deal that allows the moratorium to move forward," Sacks told The Post Monday. Opponents had tried to get the AI provision out of the megabill completely via the so-called "Byrd Bath" process, in which Senate Parliamentarian Elizabeth MacDonough examined the bill to see if it complied with rules allowing it to skirt the 60-vote filibuster. Ultimately, after Cruz's team modified the language, the parliamentarian approved the 10-year AI moratorium, but resistance from Blackburn and others remained. Despite the effort to reach a compromise, some critics argue that Blackburn's deal with Cruz is unlikely to protect the state laws she sought to save.
As currently written, the bill text only provides exemptions for laws that do not impose an "undue or disproportionate burden" on AI systems, providing a loophole for Big Tech firms to argue in court that laws like Tennessee's ELVIS Act -- which bars AI systems from mimicking a person's voice without consent -- cannot stand. "The undue burden clause basically supersedes the carve-outs. Big Tech will argue that any kids' safety bill will cause undue burden on the entire AI system and therefore it can't be enforced," said one DC tech policy official who opposes the deal. "This limits where the pushback can be," the official said. "All of the sudden, the only people who can pass laws on AI are Congress, and it's easier for tech to kill legislation in Congress than it is [in] state houses." Sacha Haworth, executive director of the Tech Oversight Project, called the Blackburn-Cruz deal "terrible." "At a time when new details emerge every day about Big Tech's untested AI assistants and chatbots causing harm to people, especially kids, Congress should be encouraging more oversight, not AI amnesty," she said. The Senate was expected to vote on passage of the One Big Beautiful Bill later Monday, with the House set to take it up as early as Wednesday. Congressional GOP leadership has set a goal for Trump to sign the measure into law by Friday's July 4 holiday.
[55]
Hey, Congress: Please make sure the 'big bill' doesn't let Big Tech...
We're not convinced that Sen. Marsha Blackburn's excellent intervention is enough to fix an obscure part of the One Big Beautiful Bill Act that threatens to destroy . . . The Post, among others. It's generally described as "a 10-year moratorium on state regulation of Artificial Intelligence," but this key section of OBBBA actually seems to be a far broader license for Big Tech -- one that could protect it from any consequences for (among other wrongs) openly stealing the work product of any and all of the news industry to train AIs or simply sell ads off of other people's content. Blackburn (R-Tenn.), backed by other House and Senate Republicans, has won a compromise that 1) makes the restrictions last only five years, and 2) explicitly does not protect child-sexual-abuse material, unauthorized use of a person's likeness and other deceptive practices. Sounds good, but we'd suggest adding two more explicit provisions -- each, notwithstanding any other language in the measure: First, that this does not cover any tech in wide use as of, say, Jan. 1, 2024: Nothing gets "grandfathered in" under cover of protecting the nation's lead in developing AI. Second, that it does not grant any new criminal or civil immunity with regard to intellectual-property theft: Even if you're using our stuff for AI research, we can still sue to make you pay. We understand that 50 states each trying to set legal frameworks for AI development could block vital innovations with an ocean of red tape; that's a legitimate worry. "Move fast and break things" is one popular tech slogan; lawmakers should take care they don't license the industry to break us.
[56]
'Big beautiful' AI rule means feds must act NOW to stop Big Tech's...
Deep within President Donald Trump's One Big Beautiful Bill Act -- the major spending legislation he wants to see by July 4 -- is a rule that holds enormous implications for the rapidly developing artificial intelligence sector. The Senate is debating a provision that would prevent state governments from regulating the AI industry for years. Supporters claim this moratorium would stop a patchwork of conflicting state laws from slowing AI's rocketing development. But without subsequent federal action, a moratorium on state regulation risks making the AI industry a law-free zone, where Big Tech companies can essentially do whatever they want with an untested, sometimes exploitative new technology. We've needed federal regulation on AI companies for some time, but if this new moratorium passes, it will become even more urgent for Congress to act. If the AI industry is going to grow sustainably and responsibly, we need legislation to provide guardrails and clear rules about how to protect the creators of content that AI tools use -- publishers, authors, journalists, artists, musicians and creatives of all types. Right now, those content creators are AI's victims. Big Tech and AI companies scrape vast amounts of content to build and operate their generative AI products, which turn content into GenAI outputs for users. Sometimes they just reproduce content creators' passages word for word -- without credit or compensation. AI companies admit these unfair and un-American tactics are fundamental to their businesses, but they refuse to pay because it's cheaper to steal. Even worse, this predatory behavior lets AI models act as information gatekeepers. If Big Tech is left to its own devices, Americans will have less access to accurate information, and certainly no one to hold accountable for errors and mistakes. Reporting on stories that Americans need to know will dwindle as the AI companies undermine the business models of publishers, opening the door to viewpoint suppression and creating opportunities for foreign propagandists. How dire these problems will become is a matter of guesswork -- because AI development is currently a black box. Developers do not share information on whether or how they are obtaining consent for using publisher content. (News reports suggest that when they do share information about these methods, it is sometimes misleading.) Publishers must hire experts to reverse-engineer how their content has been taken, a costly process that overburdens small publishers and can't always identify all works that were used in training the models. This lack of transparency hinders the enforcement of intellectual property rights and distorts regulatory decisions, business development and more. Federal legislation could address these issues by requiring recordkeeping and full disclosure. AI companies must let publishers know whether a generative AI model was trained on their work -- and must also explain whether certain publications have been specifically excluded from AI models, so that the public can judge any bias. Further, AI companies must disclose the sources they use to keep their models' responses current. Simple rules such as these will prompt commercial GenAI developers to enter agreements with publishers to use their content -- agreements that will likely block AI companies and foreign actors from distorting the news that the public receives. The benefits will be widespread. 
These rules would strengthen America's position in the AI race by making its products more trustworthy and preserving the journalism that lies at its foundation. Protecting intellectual property and homegrown content is what gives American AI companies an international competitive edge. Strong federal rules will also keep many small media businesses viable, protecting thousands of workers and their communities. The White House blueprint for AI wisely recognizes that AI development must be responsible and aligned with American values, including respect for intellectual property and the rule of law. If Congress is going to act, as it appears it might, to limit the ability of states to enact these important regulations, then it's up to the House and Senate to fill that gap -- and set this growing but potentially dangerous industry on a solid foundation.
[57]
US Senate strikes AI regulation ban from Trump megabill
WASHINGTON (Reuters) -The Republican-led U.S. Senate voted overwhelmingly on Tuesday to remove a 10-year federal moratorium on state regulation of artificial intelligence from President Trump's sweeping tax-cut and spending bill. Lawmakers voted 99-1 to strike the ban from the bill by adopting an amendment offered by Republican Senator Marsha Blackburn. The action came during a marathon session known as a "vote-a-rama," in which lawmakers offered numerous amendments to the legislation that Republicans eventually hope to pass. Republican Senator Thom Tillis was the lone lawmaker who voted to retain the ban. The Senate version of Trump's legislation would have only restricted states regulating AI from tapping a new $500 million fund to support AI infrastructure. Major AI companies, including Alphabet's Google and OpenAI, have expressed support for Congress taking AI regulation out of the hands of states to free innovation from a panoply of differing requirements. Blackburn presented her amendment to strike the provision a day after agreeing to compromise language with Senate Commerce Committee chair Ted Cruz that would have cut the ban to five years and allowed states to regulate issues such as protecting artists' voices or child online safety if they did not impose an "undue or disproportionate burden" on AI. But Blackburn withdrew her support for the compromise before the amendment vote. "The current language is not acceptable to those who need these protections the most," the Tennessee Republican said in a statement. "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens." (Reporting by David Morgan, Editing by William Maclean and Alex Richardson)
The U.S. Senate voted 99-1 against Senator Ted Cruz's proposal to ban state-level AI regulation, highlighting bipartisan opposition to the measure and emphasizing the importance of consumer protection in AI governance.
In a stunning display of bipartisan unity, the U.S. Senate voted 99-1 against Senator Ted Cruz's (R-Texas) proposal to impose a moratorium on state-level artificial intelligence (AI) regulation 1. This decisive vote marks a significant shift in the ongoing debate over AI governance and highlights the importance of consumer protection in the rapidly evolving field of AI technology.
Cruz's initial proposal, included in a budget reconciliation bill, sought to ban state AI regulation for 10 years 2. The plan faced immediate opposition from both Democrats and Republicans, leading to several revisions: Cruz first tied compliance to the $42 billion broadband deployment fund, then narrowed the penalty so that only a new $500 million AI infrastructure fund was at stake, and finally agreed with Senator Marsha Blackburn to shorten the moratorium to five years with carve-outs for child online safety and likeness protections. Despite these modifications, the proposal continued to face strong opposition from lawmakers and various stakeholders.
The rejection of Cruz's plan united politicians across the political spectrum. Key concerns included the preemption of state consumer protection laws covering robocalls, deepfakes and autonomous vehicles, the risk of leaving the AI industry without meaningful oversight, and the erosion of states' rights. Senator Maria Cantwell (D-Wash.) emphasized the importance of preserving state consumer protection laws, while Senator Marsha Blackburn (R-Tenn.) expressed concerns about Big Tech exploiting children, creators, and conservatives 14.
The proposed moratorium had garnered support from some prominent tech industry figures, including OpenAI's Sam Altman and a16z's Marc Andreessen 3. They argued that a patchwork of state regulations could hinder innovation and America's competitiveness in the global AI race.
However, critics, including Anthropic's CEO Dario Amodei and various consumer rights advocates, warned that the moratorium could allow AI companies to operate with minimal oversight 3. This debate highlighted the complex balance between fostering innovation and ensuring responsible AI development.
The overwhelming rejection of Cruz's proposal signals a clear preference for maintaining states' ability to regulate AI. The outcome suggests that, absent federal legislation, states will continue to serve as the primary venue for AI consumer protections.
As AI technology continues to advance rapidly, the challenge of creating effective and balanced regulation remains at the forefront of policy discussions. The Senate's decision marks a significant moment in this ongoing debate, emphasizing the need for collaborative approaches that protect consumers while fostering innovation in the AI sector.