31 Sources
[1]
Trump's AI framework targets state laws, shifts child safety burden to parents | TechCrunch
The Trump administration on Friday laid out a legislative framework for a singular policy for AI in the United States. The framework would centralize power in Washington by preempting state AI laws, potentially undercutting the recent surge of efforts from states to regulate the use and development of the technology. "This framework can only succeed if it is applied uniformly across the United States," reads a White House statement on the framework. "A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race." The framework outlines seven key objectives that prioritize innovation and scaling AI, and proposes a centralized federal approach that would override stricter state-level regulations. It places significant responsibility on parents for issues like child safety, and lays out relatively soft, non-binding expectations for platform accountability. For example, it says Congress should require AI companies to implement features that "reduce the risks of sexual exploitation and harm to minors," but does not lay out any clear, enforceable requirements. Trump's framework comes three months after he signed an executive order directing federal agencies to challenge state AI laws. The order gave the Commerce Department 90 days to compile a list of "onerous" state AI laws, potentially risking states' eligibility for federal funds like broadband grants. The agency has yet to publish that list. The order also directed the administration to work with Congress on a uniform AI law. That vision is coming into focus, and it mirrors Trump's earlier AI strategy, which focused less on guardrails and more on promoting companies' growth. The new framework proposes a "minimally burdensome national standard," echoing the administration's broader push to "remove outdated or unnecessary barriers to innovation" and accelerate AI adoption across industries.
This is a pro-growth, light-touch regulatory approach championed by so-called "accelerationists," one of whom is White House AI czar and venture capitalist David Sacks. While the framework nods to federalism, the carve-outs for states are relatively narrow, preserving only their authority over general laws like fraud and child protection, zoning, and state use of AI. It draws a hard line against states regulating AI development itself, which it says is an "inherently interstate" issue tied to national security and foreign policy. The framework also seeks to prevent states from "penaliz[ing] AI developers for a third party's unlawful conduct involving their models" -- a key liability shield for developers. Missing from that framework are any gestures towards liability frameworks, independent oversight, or enforcement mechanisms for potential novel harms caused by AI. In effect, the framework would centralize AI policymaking in Washington while narrowing the space for states to act as early regulators of emerging risks. Critics say states are the sandboxes of democracy and have been quicker to pass laws around emerging risks. Notably, New York's RAISE Act and California's SB-53 seek to ensure large AI companies have and adhere to safety protocols that are publicly documented. "White House AI czar David Sacks continues to do the bidding of Big Tech at the expense of regular, hardworking Americans," said Brendan Steinhauser, CEO of The Alliance for Secure AI. "This federal AI framework seeks to prevent states from legislating on AI and provides no path to accountability for AI developers for the harms caused by their products." Many in the AI industry are celebrating this direction because it gives them broader liberties to "innovate" without the threat of regulation. "This framework is exactly what startups have been asking for: a clear national standard so they can build fast and scale," Teresa Carlson, president of General Catalyst Institute, told TechCrunch. 
"Founders shouldn't have to navigate a patchwork of conflicting state AI laws that impede innovation." The framework was issued at a moment when child safety has emerged as a central flashpoint in the debate over AI. Certain states have moved aggressively to pass laws aimed at protecting minors and placing more responsibility on tech companies. The administration's proposal points in a different direction, placing greater emphasis on parental control than platform accountability. "Parents are best equipped to manage their children's digital environment and upbringing," the framework reads. "The Administration is calling on Congress to give parents tools to effectively do that, such as account controls to protect their children's privacy and manage their device use." The framework also says the administration "believes" that AI platforms should "implement features to reduce potential sexual exploitation of children and encouragement of self-harm." While it calls on Congress to require such safeguards, and affirms that existing laws, including those banning child sexual abuse materials, should apply to AI systems, the proposal employs qualifiers like "commercially reasonable," and stops short of laying out clear prerequisites. On the topic of copyright, the framework attempts to find a middle ground between protecting creators and allowing AI systems to be trained on existing works, citing the need for "fair use." That kind of language mirrors arguments AI companies have made as they face a growing number of copyright lawsuits over their training data. The main guardrails Trump's AI framework seems to outline involve ensuring "AI can pursue truth and accuracy without limitation." Specifically, it focuses on preventing government-driven censorship, rather than platform moderation itself. 
"Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas," the framework reads. It also instructs Congress to provide a way for Americans to seek legal redress against government agencies that seek to censor expression on AI platforms or dictate information provided by an AI platform. The framework comes as Anthropic is suing the government for allegedly infringing on its First Amendment rights after the Defense Department labeled it a supply chain risk. Anthropic argues that the DoD is designating it as such in retaliation for not allowing the military to use its AI products for mass surveillance of Americans or for making targeting and firing decisions in autonomous lethal weapons. Trump has referred to Anthropic and its CEO Dario Amodei as "woke" and a "radical" leftist. The framework's language, which emphasizes protecting "lawful political expression or dissent," seems to build on Trump's earlier Executive Order targeting so-called "woke AI," which pushed federal agencies to adopt systems deemed ideologically neutral. It's unclear what qualifies as censorship versus standard content moderation, so such language could make it difficult for regulators to coordinate with platforms on issues like misinformation, election interference, or public safety risks. Samir Jain, vice president of policy at the Center for Democracy and Technology, pointed out: "[The framework] rightly says that the government should not coerce AI companies to ban or alter content based on 'partisan or ideological agendas,' yet the Administration's 'woke AI' Executive Order this summer does exactly that."
[2]
Trump Outlines New AI Regulation Plan: What's in It and What's Missing
The White House's new policy framework for regulating generative artificial intelligence, released Friday, covers many areas, but one thing is clear: President Donald Trump wants the federal government to set the rules. And those rules appear to fall far short of what consumer and privacy advocates argue is necessary. The generative AI revolution has been underway for years, and US legislation is slow to catch up. This is despite the growing awareness of AI's harms and challenges: chatbots' dangerous impacts on mental health and child development, widespread legal wrangling over copyright protections, the dangerous spread of deepfakes and AI-powered scams, to name a few. Sen. Marsha Blackburn introduced the new policy package, called The Trump America AI Act, in Congress on Thursday. The Tennessee Republican's bill is an attempt to codify a vision based on Trump's 2025 AI Action Plan, while delving into more legal specifics and providing guidance on implementing new laws (or changing existing ones). Trump has maintained that the federal government should be responsible for regulating the AI industry -- and that requiring AI companies to comply with 50 different sets of state laws would prevent the US from "winning" the global AI race. However, a proposal to temporarily ban states from regulating AI failed back in July, when it was removed at the last minute from the massive budget bill, known as the "One Big Beautiful Bill Act." Now, the White House is doubling down on its claim to be in charge, with a few exceptions. The plan addresses some of the biggest concerns people have about AI: job loss, copyright chaos for creators, rapidly expanding infrastructure such as data centers and the protection of vulnerable groups like children. But critics say it doesn't go far enough to regulate the fast-growing AI industry.
"It is light on protection and heavy on promotion of dangerous AI systems," Alan Butler, president and executive director of the Electronic Privacy Information Center, said in a statement. "The American people deserve better, and Congress should do better than this." The White House's 2026 AI proposal says Congress should not create a new governing body to oversee AI rules, but should let existing agencies and subject-matter experts regulate as they see fit. Protecting children: This is one area where the federal government won't prevent states from creating laws. And many state governments are already leading the charge, especially in regulating romantic or companion chatbots. The plan highlights protecting kids from AI-powered deepfakes, a major issue given AI's role in creating child sexual abuse material. Shielding young people from the ill effects of AI is an ongoing battle, with several high-profile cases of teenagers using AI for self-harm and suicide. Blackburn's policy plan includes general language related to kids' online safety. Existing bills like the Kids Online Safety Act and the Children's Online Privacy Protection Rule are, theoretically, designed to protect kids, but advocates and tech experts say they could create a chilling effect on free speech and lead to censorship. Though Trump's AI framework addresses censorship, it's limited to preventing AI companies from including ideological or partisan bias in their products. Trump has previously railed against what he calls "woke" AI, a term the president and his allies have used to attack concepts like diversity, equity and inclusion. Job loss: It's not just translators and data entry folks who are worried about losing their jobs to AI -- legacy tech workers like coders and engineers are, too. There have been a lot of concerns about AI disrupting the workforce, with retail giants like Amazon laying off thousands of employees in the name of AI efficiency.
The White House says it should use "nonregulatory" methods to focus on youth development and AI workforce training. Infrastructure: In line with Trump's previous AI Action Plan, the framework calls for states and local governments to streamline data center construction and operation. These facilities are increasingly controversial, with nearby residents reporting environmental damage and strain on their existing electrical grids, creating higher electric bills. Several big tech companies recently agreed to foot the bill for any higher electricity costs, but there's no way to enforce the voluntary pledge. Copyright: Whether the use of copyrighted materials in AI training is fair use or copyright infringement is one of the biggest legal issues of the AI age. The plan reiterates the administration's position that AI companies are covered by fair use -- meaning they wouldn't have to obtain permission or pay for copyrighted content when creating their models. But, given the ever-growing number of lawsuits asking the judiciary the same question, the plan says the federal government should allow those cases to play out. So far, limited cases with Anthropic and Meta have carved out narrow victories for tech companies, not authors. The framework document hints that the federal government could become a future licensing partner for AI companies, stating that it should "provide resources to make federal datasets accessible to industry and academia in AI-ready formats for use in training AI models and systems." (Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Tech industry groups praised the administration's proposals, while consumer advocacy groups offered skepticism at best. In a statement backing the plan, the Consumer Technology Association supported a single set of rules for the entire country.
"AI can and will make us better, and we agree that children need special protection, First Amendment rights are paramount, harmful deep fakes should be regulated, and Congress should not act to restrict AI platforms from relying on fair use protection," the tech industry trade group said. But according to Samir Jain, vice president of policy at the Center for Democracy and Technology, the government's playbook is rife with internal contradictions. While it calls for the federal government to preempt state rules and laws on AI development, it also says the federal government shouldn't undermine state authority. "The White House's high-level AI framework contains some sound statements of principles, but its usefulness to lawmakers is limited by its internal contradictions and failure to grapple with key tensions between various approaches to important topics like kids' online safety," Jain said in a statement. Ben Winters, director of AI and data privacy at the Consumer Federation of America, said the proposal prioritizes Big Tech over consumers. "It's encouraging to see some stated desires to protect people from AI-generated scams and data abuse of minors, but it's not enough," Winters said in a statement. "We need to see money where their mouth is on the protections -- more money for consumer protection agencies at both the federal and state levels. So far, they've done nothing but cut and hamstring them."
[3]
Trump takes another shot at dismantling state AI regulation
The Trump administration on Friday unveiled its new legislative blueprint for AI regulation, and the seven-point plan includes a clear message: The federal government should avoid many AI regulations beyond a set of child safety rules, and it should bar states from messing with the "national strategy to achieve global AI dominance." The plan advises Congress to protect minors using AI services with more safeguards and take action to attempt to prevent electricity costs from spiking due to AI infrastructure. It encourages "youth development and skills training" to boost familiarity with AI tools, without much further detail. But it suggests taking a wait-and-see approach to whether training AI models on copyrighted material without permission is legal, and it maintains a long-running Republican push to limit whether states can enact their own AI laws. The entire document and all its provisions, however, will only take effect if Congress adopts them into legislation and passes them into law. The Trump administration blueprint encourages passing laws similar to the Take It Down Act -- which was signed into law in May 2025 and bars nonconsensual AI-generated "intimate visual depictions," requiring certain platforms to rapidly remove them. The document also endorses age verification, suggesting that Congress "establish commercially reasonable, privacy protective, age assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors." Age-gating is controversial from a privacy standpoint and has a lot of potential surveillance implications. It proposes other child protection measures like limiting the ability of AI models to train on minors' data and limits on targeted advertising based on their data. (The document does not seek to prohibit those practices for children's data, just limit them.)
At the same time, it states that Congress "should avoid setting ambiguous standards about permissible content, or open-ended liability, that could give rise to excessive litigation." In the age of deepfakes, when AI-generated videos are looking more real than ever and a fake video of a politician can instantly propagate global conspiracy theories, the new policy blueprint seeks to "consider establishing a federal framework protecting individuals from the unauthorized distribution or commercial use of AI-generated digital replicas of their voice, likeness, or other identifiable attributes." (That could mean finally creating a federal likeness law.) But it also says lawmakers should provide "clear exceptions" for parody, news reporting, satire, and other First Amendment-protected use cases. The blueprint also discourages Congress from taking up AI copyright issues. "Although the Administration believes that training of AI models on copyrighted material does not violate copyright laws, it acknowledges arguments to the contrary exist and therefore supports allowing the Courts to resolve this issue," it says. "Congress should not take any actions that would impact the judiciary's resolution of whether training on copyrighted material constitutes fair use." In another section, the blueprint raises concerns about large-scale scams and fraud that are increasingly powered by AI, stating that Congress should "augment existing law enforcement efforts to combat AI-enabled impersonation scams and fraud that target vulnerable populations such as seniors," although no extra details are provided. The Trump administration continued leaning into the pro-federal, anti-state approach to AI regulation that it's been promoting (so far unsuccessfully) for nearly a year. 
The blueprint says Congress should "preempt state AI laws that impose undue burdens" and avoid "fifty discordant" standards for companies, adding that states "should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications." Other legal protections for AI companies were baked in, too, such as the idea that states shouldn't be allowed to "penalize AI developers for a third party's unlawful conduct involving their models." But in the child-privacy section, the document does allow states some limited wiggle room, stating that Congress shouldn't preempt states from "enforcing their own generally applicable laws protecting children, such as prohibitions on child sexual abuse material, even where such material is generated by AI." The allowance comes after numerous figures from both parties expressed concern about overturning local child safety laws, including nearly 40 attorneys general for US states and territories. The overall goal, as in earlier Trump administration proposals, is speeding AI development. "The United States must lead the world in AI by removing barriers to innovation [and] accelerating deployment of AI applications across sectors," the document states, adding that Congress should find ways to make federal datasets available to AI companies and academics in "AI-ready formats for use in training AI models and systems." It didn't specify which types of federal datasets it sought to make publicly available for AI training. 
The plan also definitively answers a long-asked question in AI regulation -- whether there should be one federal body responsible for AI regulation or whether AI regulation should be left to each sector -- and says that Congress "should not create any new federal rulemaking body to regulate AI"; instead, it says, it will "support development and deployment of sector-specific AI applications through existing regulatory bodies with subject matter expertise." President Trump signed an executive order last July seeking to prevent "woke AI" by banning government agencies from using models that "incorporated" topics like systemic racism. He recently ordered all agencies to blacklist the "Radical Left AI company" Anthropic for setting limits on military use of its models, something Anthropic alleges violates its First Amendment rights. At the same time, the blueprint states that the government "must defend free speech and First Amendment protections, while preventing AI systems from being used to silence or censor lawful political expression or dissent." It goes further to say that Congress should explicitly prevent the government from "coercing" AI providers "to ban, compel, or alter content based on partisan or ideological agendas" -- and that in the event that government agencies censor expression on AI platforms or dictate the information they provide, then Congress should provide a way for Americans to "seek redress." Last month, we saw the first bipartisan effort to address higher utility bills in communities with data centers nearby, and the new AI policy framework seems to address those concerns on both sides of the aisle, saying that Congress should find ways to make sure that "residential ratepayers do not experience increased electricity costs as a result of new AI data center construction and operation." 
But, it says, Congress should streamline federal permits for data center construction and operation, making it easier for AI companies to "develop or procure on-site and behind-the-meter power generation" -- meaning that data center construction should still be full-speed-ahead, but community members shouldn't have to literally pay the price on their monthly bills.
[4]
White House Unveils AI Legislative Plan for Skeptical Congress
President Donald Trump released a national framework for regulating artificial intelligence on Friday, laying the groundwork for Congress to create a federal standard for the rapidly growing technology. The framework, which builds upon Trump's December executive order, calls for online safeguards for children, less stringent permitting requirements so data centers can generate power on site, and measures to prevent censorship. The latter provision is meant to address allegations by conservatives that technology companies are biased against their views, which the firms have denied. It also calls for intellectual property rights protections, removing "outdated barriers to innovation" and expanding AI workforce training. It's unclear whether the White House proposal will muster enough support on Capitol Hill, where mandates on tech companies have divided Republicans. The framework mirrors much of a draft measure released by Senator Marsha Blackburn, a Tennessee Republican. Her plan also calls for protecting consumers from electricity price spikes. Trump has pushed tech giants, including Amazon.com Inc., Meta Platforms Inc., Microsoft Corp. and Google parent Alphabet Inc., to work with the federal government to ensure corporations cover the cost of power they use for AI initiatives. Such legislation would need the support of Democrats to pass the Senate. That would require political compromise ahead of the November midterm elections, in which Democrats are optimistic about taking control of Congress and therefore may be reluctant to strike a deal with Republicans.
AI stands to be a divisive issue in the midterms, with tech executives and companies pouring hundreds of millions of dollars into races to elect friendly members of Congress. But the technology faces backlash from some voters concerned about the rapid development of data centers in their communities, the electricity use and environmental costs of those centers, potential job losses from AI and possible new vulnerabilities for their personal information. Washington has struggled to regulate emerging technologies for decades, with the tools developing at a far more rapid pace than lawmakers can pass legislation. Trump has used executive authority to cajole tech and AI firms to ensure that the US has enough power to fuel energy-hungry data centers. He has championed the use of coal-fired plants, along with natural gas and nuclear power to help fuel the boom. He has taken an interest in establishing what the White House has described as "American dominance" in AI. The president unveiled a White House action plan last July for US technology manufacturing -- including of high-powered semiconductors necessary for AI. He's also enacted security measures to ensure that competitors such as China do not gain an edge. As AI development has expanded, states have moved to pass their own rules intended to mitigate threats posed by the emerging technology, such as algorithmic discrimination and unauthorized deepfakes. The White House, with the support of the tech companies, has sought to preempt the patchwork of state-by-state laws that have emerged in the absence of national AI regulation, arguing that local measures have become excessive and stymie growth.
[5]
Donald Trump urges narrow AI regulation amid fierce Maga backlash
Donald Trump's administration has urged Congress to pass narrow child safety and content laws to rein in AI, amid a fierce backlash against the technology from within the Maga coalition. The unexpected move by the White House comes just days after Republican senator Marsha Blackburn released a draft of a more sweeping bill, which she claimed had the president's backing. It would allow AI companies to be sued for certain harms. The White House proposal calls for giving parents greater control over their children's privacy and content settings in AI apps, as well as age verification, but warns against new state laws and urges "industry-led standards" instead of any new federal watchdog for the sector. Trump, whose AI policy has been shaped by officials with close links to Silicon Valley, including venture capitalist turned White House adviser David Sacks, has favoured light-touch regulation of the industry. But the president has struggled to convince large parts of his own party to fall in line. The administration last year twice tried to legislate to ban state-level AI regulations, but the measures failed amid opposition from Republican senators and governors. Trump instead signed an executive order that threatened to withhold funding from states that continued to pass "onerous" AI laws. A series of recent polls has shown that concern about data centres and the societal impact of AI is widespread among Trump voters. Republican legislators across the country have also been defying the president by introducing dozens of state-level bills to regulate the tech. The four-page framework released on Friday focused on "protecting our children online, shielding families from higher energy costs, respecting creators' rights and supporting American workers", said Michael Kratsios, director of the White House's Office of Science and Technology Policy.
The framework calls on Congress to "empower parents and guardians with robust tools to manage their children's privacy settings, screen time, content exposure and account controls". It also calls for AI labs to establish "commercially reasonable" age verification. AI labs have faced legal challenges from copyright holders, such as authors, for hoovering up content created by humans to train their models. The Trump proposal backs the training of models with this content, but suggests Congress consider frameworks for copyright holders to "collectively negotiate compensation from AI providers". But it explicitly warns against creating "any new federal rulemaking body to regulate AI", and says existing government watchdogs should oversee the sector "through industry-led standards". The framework was welcomed by lobbyists for the AI industry, who are fighting to stop a proliferation of state-level laws, but was criticised by some child-safety campaigners for offering inadequate safeguards. Mackenzie Arnold, director of US policy at the Institute for Law & AI, said the framework was "clearer on what it doesn't want than on what it does". He added that he was concerned that "the framework continues to treat governance and innovation as competing aims".
[6]
White House releases national AI framework
WASHINGTON, March 20 (Reuters) - The White House released a framework on artificial intelligence on Friday that aims to ensure protections for children, communities and small businesses as part of a national plan to regulate developments in the field. The Trump administration has been pushing for a single legislative framework that can be applied uniformly across the country, rather than leaving states to form their own plans. "The administration looks forward to working with Congress in the coming months to turn this framework into legislation that the president can sign," the White House said in a statement. Reporting by Katharine Jackson and Doina Chiacu; Editing by Katharine Jackson
[7]
The White House proposes new AI policy framework that supersedes state laws
The White House has announced a new AI policy framework that calls for Congress to craft federal regulation that overrules state AI laws. The Trump administration has made multiple attempts to overrule more restrictive state-level AI regulation, but has failed so far, most notably when an AI preemption provision was stripped from the "One Big Beautiful Bill." The framework focuses on a variety of topics, covering everything from child privacy to the use of AI in the workforce. "Importantly, this framework can succeed only if it is applied uniformly across the United States," the White House writes. "A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race." In terms of child privacy protections, the framework calls for Congress to require tools like "screen time, content exposure and account controls" while also affirming that "existing child privacy protections apply to AI systems," including limits on how data is collected and used for AI training. The framework also calls for a carveout that allows states to enforce "their own generally applicable laws protecting children, such as prohibitions on child sexual abuse material, even where such material is generated by AI."
[8]
White House releases AI policy framework for Congress, with six guiding principles
WASHINGTON (AP) -- The White House on Friday released its framework for how it wants Congress to address the issue of artificial intelligence. The legislative blueprint, released on its website, outlines a half-dozen guiding principles for lawmakers to keep in mind when developing policies governing artificial intelligence. Those areas include: protecting children and empowering parents; safeguarding and strengthening American communities; respecting intellectual property rights; preventing censorship and protecting free speech; enabling innovation and ensuring American AI dominance; and educating Americans and developing an AI-ready workforce. "The Trump Administration is committed to winning the AI race to usher in a new era of human flourishing, economic competitiveness, and national security for the American people," the White House said in announcing its framework. "Achieving these goals requires a commonsense national policy framework that both enables American industry to innovate and thrive and ensures that all Americans benefit from this technological revolution." The White House said "strong federal leadership" is needed to make sure the public can trust how artificial intelligence is being used in their lives. Members of Congress from both parties, as well as civil liberties and consumer rights groups, have pushed for more regulations on AI, saying there is not enough oversight for the powerful technology. President Donald Trump signed an executive order in December to block states from crafting their own regulations, arguing that a patchwork of rules would hurt growth in the sector.
[9]
Trump admin. unveils national AI policy framework to limit state power
The Trump administration on Friday issued a legislative framework for a single national policy on artificial intelligence, aiming to create uniform safety and security guardrails around the nascent technology while preempting states from enacting their own AI rules. The six-pronged outline broadly proposes a slew of regulations on AI products and infrastructure, ranging from implementing new child-safety rules to standardizing the permitting and energy use of AI data centers. It also calls on Congress to address thorny issues surrounding intellectual-property rights and craft rules "preventing AI systems from being used to silence or censor lawful political expression or dissent." The administration said in an official release that it wants to work with Congress "in the coming months" to convert its framework into a bill that President Donald Trump can sign. The White House wants to codify the framework into law "this year" and believes it can generate bipartisan support, Michael Kratsios, director of the White House Office of Science and Technology Policy, said in an interview with Fox News on Thursday evening. That won't be easy in a deeply divided Congress where Republicans hold thin and often fractious majorities, and where Trump has already urged GOP lawmakers to prioritize his controversial voter-ID bill above all else ahead of the November midterms. The Senate has spent much of this week debating the SAVE America Act even though it doesn't have the votes to clear the chamber. Amid rapidly growing concerns about AI and its impacts, lawmakers in New York, California and elsewhere have pushed to enact their own state-level regulations. 
AI industry leaders have strongly opposed those efforts, arguing that a "patchwork" of laws would hobble innovation and give global competitors like China a major advantage in the race for AI dominance.
[10]
White House Unveils A.I. Policy Aimed at Blocking State Laws
The Trump administration on Friday released new guidelines for federal legislation on the technology, recommending some safeguards for children and consumer protections for energy costs. The White House on Friday released policy guidelines that called for blocking state laws regulating artificial intelligence, while also recommending some safeguards for children and consumer protections for energy costs. Dozens of states have passed laws in recent months to regulate A.I., amid concerns about the technology's potential to steal jobs, push up energy prices and threaten national security. But President Trump has made clear U.S. companies should have mostly free rein in a global race to dominate the technology. On Friday, the White House called on Congress to pass federal A.I. legislation to override the state laws. Among the Trump administration's suggested measures, Congress would streamline the process for building data centers, the warehouses full of computers that power A.I. The framework also proposed guardrails to prevent the government from using the technology for censorship, as well as mandating A.I.-related work force training. "This framework can succeed only if it is applied uniformly across the United States," the White House said in its announcement. "A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global A.I. race." Meta, OpenAI, Google and other A.I. giants have argued that a patchwork of state laws could slow down their progress. The companies have repeatedly pointed to regulation as the biggest hindrance to the nation's success in leading the world in A.I. Some companies and their leaders have contributed to super PACs that are spending tens of millions of dollars aimed at blocking the election of candidates who favor A.I. regulation in the lead-up to the November midterm elections.
[11]
Trump Proposes a 'Light Touch' National Framework for AI Policy
The Trump administration has announced its long-awaited national policy framework for artificial intelligence, guidelines for Congress on how to regulate the emerging technology. While it was released in a three-page document, it probably could have fit on a Post-It Note. The framework offers some broad-stroke guidelines for lawmakers, encouraging Congress to implement laws to accomplish goals like protecting minors and combatting censorship. Those recommendations are in line with the type of tech industry-friendly policies that are already being pursued, which makes sense given how much money the big players in the space have spent lobbying and sucking up to the administration. For instance, Trump called on Congress to introduce "age assurance requirements" for AI, similar to proposed laws like the Kids Online Safety Act, which would implement similar standards on social media platforms. The framework also encourages Congress to establish ways for rights holders to license their material to AI companies for training models and reproduction, though it states "Any such legislation, however, should not address when or whether such licensing is required," because the administration "believes that training of AI models on copyrighted material does not violate copyright laws." As expected, the administration called for its preferred laws to take precedence over states that have already passed more comprehensive laws governing AI. "Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones," the framework reads, arguing that "Preemption must ensure that State laws do not govern areas better suited to the Federal Government or act contrary to the United States’ national strategy to achieve global AI dominance." Tucked in at the very end of the framework is a recommendation that reads like Section 230 for AI companies. 
"States should not be permitted to penalize AI developers for a third party’s unlawful conduct involving their models," it states. The inclusion of this from Trump is interesting, given his past disdain for Section 230 of the Communications Decency Act, which spares sites like Reddit and Facebook from legal liability for things posted on their platforms. The idea that AI companies aren't responsible for the outputs of their models could potentially shield them from facing consequences for misinformation or outputs like non-consensual sexually explicit material, though the proposal from the Trump administration seems more focused on keeping states from carrying out enforcement actions than providing a blanket protection for the AI companies. Whether Trump's policy framework actually goes anywhere or not, time will tell. He previously backed a 10-year moratorium that would have prevented states from establishing their own AI laws, and that got roundly shot down by everyone, including most Republicans. This framework is likely to have more support, but it's far from a sure thing that it'll get picked up by his party's members of Congress, many of whom have their own policy proposals.
[12]
"This framework can succeed only if it is applied uniformly across the United States": White House rolls out national legislative AI framework that looks to trump state level rules
* National AI Legislative Framework focuses on protecting American citizens * Child safeguarding and protecting citizens are big pushes * States retain some governance The White House has proposed a single national AI framework in order to avoid what Trump has previously derided as a "patchwork" of state laws, all in a bid to boost America's global dominance and competitiveness in the AI sector. The National AI Legislative Framework sets out a number of core principles, focusing heavily on child protection and safety. Trump's government also explained that the framework aims to tackle some elements that American citizens are most worried about - child safety was likened in importance to keeping a cap on monthly electricity bills. White House pushes for federal AI legislation over state-level "patchwork" "These issues, along with other emerging AI policy considerations, require strong Federal leadership to ensure the public's trust in how AI is developed and used in their daily lives," the announcement reads. "Overregulation by the States is threatening to undermine this Major Growth 'Engine'," Trump previously wrote in a Truth Social post. "We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes," he added. The White House argues that the federal government is best placed to set consistent national rules; however, states will still have some control over fraud and consumer protection. Child safety is just one of six core principles that make up the framework, together with: safeguarding American communities, jobs and energy supply; respecting creators' IP; preventing censorship and promoting free speech; enabling technological innovation; and preparing Americans with AI education. Lawmakers already spoke up about this concern in late 2025, arguing that states are better positioned to react more quickly to emerging tech issues. The White House is now working with Congress to turn the framework into legislation. 
[13]
White House releases Trump's national AI plan and framework
Why it matters: The four-page framework calls on lawmakers to limit the ability of states to set their own rules for the technology, setting up a renewed clash with states and Congress over the future of AI regulation. What they're saying: "Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones," the framework states. What's inside: The proposal calls on Congress to: * Address the use of AI replicas that simulate someone's likeness or voice. * Codify President Trump's pledge to require tech companies to pay for their increased energy demands. * Establish "regulatory sandboxes" to allow developers to experiment with AI under relaxed rules. It also focuses on kids' online safety: "AI services and platforms must take measures to protect children, while empowering parents to control their children's digital environment and upbringing," the framework states. What we're watching: This recommendation will shape Republican-led efforts on Capitol Hill, but disagreements over federal preemption, copyright and kids' safety remain the same sticking points that have stalled action for years.
[14]
The White House has a plan for AI regulation, and it starts with keeping states out of it | Fortune
[15]
White House AI Proposal Seeks to Override State Laws, Avoid New Regulator - Decrypt
The plan also focuses on child safety, free speech, infrastructure, and copyright disputes. The White House on Friday released a sweeping national policy framework for artificial intelligence, outlining recommendations to Congress that would set national standards for AI while relying on existing federal agencies -- rather than creating a new regulator. The proposal comes as states move ahead with their own AI laws, which the Trump administration has criticized as a burdensome "patchwork" of requirements for companies. "The Trump Administration is committed to winning the AI race to usher in a new era of human flourishing, economic competitiveness, and national security for the American people," the White House said in a statement. "Achieving these goals requires a commonsense national policy framework that both enables American industry to innovate and thrive and ensures that all Americans benefit from this technological revolution." The framework urges Congress to set national AI rules that address child safety, innovation, free speech, and intellectual property, while preempting state laws it views as burdensome. It also says those federal standards should not override states' existing authority to enforce laws on issues like fraud, consumer protection, and child sexual abuse material. The Center for Democracy and Technology said the proposal includes "some sound statements of principles," but does not resolve competing priorities. "Its usefulness to lawmakers is limited by its internal contradictions and failure to grapple with key tensions between various approaches to important topics like kids' online safety," CDT Vice President of Policy Samir Jain said in a statement shared with Decrypt. Jain also said the framework contradicted the White House's own position on government influence over AI platforms. 
"It rightly says that the government should not coerce AI companies to ban or alter content based on 'partisan or ideological agendas,' yet the administration's 'woke AI' executive order does exactly that," he said. The framework follows earlier efforts by the Trump administration to curb state-level AI regulation. In November, a draft executive order outlined steps to challenge state laws and restrict funding to states that enacted laws seen as contrary to the order. Despite the administration's attempts to set a federal standard, states have continued to pass their own measures. In October, California enacted SB 243, which would require AI companion chatbots to identify themselves and restrict certain interactions with minors while imposing disclosure rules on large developers. The White House's framework also said parents should be given more control over how children interact with AI systems, and that Congress should enact better protections against abuse. "The administration is calling on Congress to give parents tools to effectively do that, such as account controls to protect their children's privacy and manage their device use," the White House said. "The administration also believes that AI platforms likely to be accessed by minors should implement features to reduce potential sexual exploitation of children or encouragement of self-harm." The administration also said that while it views AI training on copyrighted material as lawful, it believes courts should decide the issue, adding that Congress "should not take any actions that would impact the judiciary's resolution of whether training on copyrighted material constitutes fair use." The proposal also calls for a federal law to protect individuals from unauthorized AI-generated deepfakes, expanding on a bipartisan law signed by Trump last year that made non-consensual intimate images and deepfake porn a federal crime. 
The new framework, however, comes with exceptions for parody, satire, news reporting, and "other expressive works protected by the First Amendment." The plan ties AI policy to infrastructure and economic goals, including faster permitting for data centers and ensuring residential electricity costs do not rise as a result of AI infrastructure buildout under a proposed "Ratepayer Protection Pledge." It also calls for expanded use of on-site and behind-the-meter power generation to support data center development and improve grid reliability, along with incentives to expand AI adoption and access to federal datasets. Consumer advocacy group Public Citizen called the proposal "a national framework to protect Big Tech at the expense of everyday Americans." "It is an extraordinary payback to the Big Tech companies that have lined up to throw pocket change at Trump's inauguration, and for his ballroom, and for the Melania movie, and to settle bad faith lawsuits and more," co-president Robert Weissman said in a statement shared with Decrypt. Weissman said the focus on preempting state laws could leave gaps in oversight, arguing that without new federal standards, limiting state action would reduce regulation. He pointed to ongoing state efforts addressing issues such as deepfakes, AI companions, and algorithmic decision-making. "This is a disgraceful proposal that, happily, will be dead on arrival in Congress," Weissman said. "It does, however, show yet again that Donald Trump aligns his interests with the biggest corporations and the billionaire class, not those of the American people."
[16]
White House releases AI legislation framework
The White House released a new framework for national AI legislation Friday morning, focusing on protections for children and boosting America's AI industry while calling for sharp limits on state laws that it says would slow down AI development, as well as limits on legal liability for AI developers. The legislative proposal emphasizes the need for Congress to establish a unifying federal approach to AI rather than let states set individual rules that it says could hamper AI innovation, a position the White House has repeatedly signalled over recent months. Politicians and activists across the political spectrum have instead advocated for states' ability to regulate AI in the absence of meaningful federal action, as Congress debates how to regulate the fast-moving technology. "The Federal government is uniquely positioned to set a consistent national policy that enables us to win the AI race and deliver its benefits to the American people," the White House said in an announcement accompanying the framework's release, "while effectively addressing the policy challenges that accompany this transformative technology. The Administration looks forward to working with Congress in the coming months to turn this framework into legislation that the President can sign." The framework is split into seven main areas, from "Protecting Children and Empowering Parents" and "Respecting Intellectual Property Rights and Supporting Creators" to "Educating Americans and Developing an AI-Ready Workforce." Several of the framework's provisions, including the focus on child protections and support for building American AI infrastructure, were previewed in President Donald Trump's executive order from December. That order directed David Sacks, the White House's AI czar, and Michael Kratsios, Director of the Office of Science and Technology Policy, to create Friday's draft framework. 
The framework supports limiting the liability of America's AI developers due to harms from AI systems, particularly railing against "open-ended liability" which "could give rise to excessive litigation" for issues related to child safety. The framework also advances limitations on states' ability to "penalize AI developers for a third party's unlawful conduct involving their models." These proposed restrictions on liability align with messaging from Sacks, a venture capitalist, and many leading Silicon Valley investors claiming that significant liability provisions would harm American AI innovation and scare away future investment. The need to regulate America's booming AI industry has quickly become a uniting factor for MAGA conservatives and progressive activists. In recent months, slowing the spread and construction of data centers has become a key bipartisan issue in many state capitols.
[17]
White House eyes Friday rollout for AI framework
Why it matters: Republicans are looking to the White House for direction on AI, but its plan is likely to run into the same sticking points that have stalled action for years. * Those include how to protect children online and whether to preempt state laws that conflict with the federal standards they're trying to set. * Pressure is mounting for Congress to act as states move ahead with laws that AI companies are increasingly comfortable living with. What's inside: The White House is eyeing Friday to announce a legislative framework for federal AI rules, multiple sources familiar with the matter told Axios. * In addition to preemption, the framework is expected to cover child safety, communities, creators and censorship -- "the four C's" outlined by White House AI czar David Sacks. The White House has been working with Hill leadership on plans. The House Energy and Commerce and Senate Commerce Committees would have primary jurisdiction on any AI proposal. * Asked about involvement in the effort, committee spokesman Matt VanHyfte pointed Axios to an essay Chair Brett Guthrie (R-Ky.) wrote earlier this year outlining his key pillars to AI leadership: "dominance, deployment and safeguards." * "We're excited to see what the White House releases, and wouldn't be surprised to see if it lines up with what Chairman Guthrie believes," VanHyfte said. "E&C is the tip of the spear when it comes to AI regulation in the House." * Blair Taylor, a spokesperson for Senate Commerce, told Axios that "we look forward to working with the White House and members of the Committee to advance meaningful AI legislation that encompasses a number of priorities, like those outlined in the Cruz AI framework." The big picture: The White House is trying to pair a national AI framework that would preempt state laws with a slate of kids' online safety bills that have bipartisan interest. * But the House and Senate remain far apart on the details of those proposals, making any package a tough lift. 
* The White House did not immediately respond to requests for comment. Friction point: The latest package of kids' safety bills that the House Energy and Commerce Committee advanced included a version of the Kids Online Safety Act that doesn't pass muster in the Senate. * The House version of the bill omits a "duty of care" that would require platforms to take reasonable steps to mitigate harms stemming from design features -- a provision senators in both parties have insisted on. * Sen. Marsha Blackburn (R-Tenn.) on Wednesday released a discussion draft of the TRUMP AI Act, which rolls together a number of Senate proposals, including her version of the children's online safety bill, and would codify many parts of Trump's executive orders on AI. The intrigue: Some major AI companies are now signaling that they are more comfortable with a patchwork of state-by-state laws in the face of congressional inaction, as long as they start to align. * OpenAI's Chris Lehane wrote in a blog this week that "in the absence of a national framework, states should align around the emerging model in California and New York." * Google president of global affairs Kent Walker told Axios in an interview this week that state coordination on AI laws is welcomed and California's SB53 and New York's RAISE Act are "manageable frameworks." The bottom line: The pressure is on for politicians to look like they're taking meaningful steps toward regulating AI ahead of the midterms.
[18]
How Trump's AI plan to override state laws could undercut key safeguards
Writing on X, White House "AI czar" David Sacks said the administration is responding to what it sees as a fragmented landscape of state-level rules, warning that a "patchwork" of regulations could slow innovation and undermine U.S. competitiveness in AI. But getting Congress to agree on sweeping AI legislation in an election year is a tall order, particularly as the industry's massive data center buildout has become a flashpoint for lawmakers on both sides of the aisle. Fast Company spoke with Mina Narayanan, an AI safety and governance research analyst at Georgetown University's Center for Security and Emerging Technology, about the details of the White House's framework and its potential implications.
[19]
Monday Morning Moan - Big Govt props up Big Tech. Why Trump 2.0's Federal landgrab is a regulatory win for the AI Bros, but a loss for society as a whole
Well, the AI vendors will be pleased at any rate. The White House has published its proposed policy framework for regulating the Artificial Intelligence sector and it looks set to put the US Federal Government on a collision course with the States, much of the rest of the world, and most definitely the European Union. The Administration recognizes that some Americans feel uncertain about how this transformative technology will affect issues they care about, like their children's wellbeing or their monthly electricity bill. These issues, along with other emerging AI policy considerations, require strong Federal leadership to ensure the public's trust in how AI is developed and used in their daily lives. There are six core principles in the proposed framework, emphasizing emotive priorities, such as parental responsibility taking priority over Big Government and protecting US Freedom of Speech, ticking boxes clearly intended to appeal to the MAGA heartland. The six objectives are defined as: Protecting Children and Empowering Parents: Parents are best equipped to manage their children's digital environment and upbringing. The Administration is calling on Congress to give parents tools to effectively do that, such as account controls to protect their children's privacy and manage their device use. The Administration also believes that AI platforms likely to be accessed by minors should implement features to reduce potential sexual exploitation of children or encouragement of self-harm. Safeguarding and Strengthening American Communities: AI development should strengthen American communities and small businesses through economic growth and energy dominance. The Administration believes that ratepayers should not foot the bill for data centers, and is calling on Congress to streamline permitting so that data centers can generate power on site, enhancing grid reliability. 
Congress should also augment Federal government ability to combat AI-enabled scams and address AI national security concerns. Respecting Intellectual Property Rights and Supporting Creators: The creative works and unique identities of American innovators, creators, and publishers must be respected in the age of AI. Yet, for AI to improve it must be able to make fair use of what it learns from the world it inhabits. The Administration is proposing an approach that achieves both of these objectives, enabling AI to thrive while ensuring Americans' creativity continues propelling our country's greatness. Preventing Censorship and Protecting Free Speech: The Federal government must defend free speech and First Amendment protections, while preventing AI systems from being used to silence or censor lawful political expression or dissent. AI cannot become a vehicle for government to dictate right and wrong-think. The Administration is proposing guardrails to ensure that AI can pursue truth and accuracy without limitation. Enabling Innovation and Ensuring American AI Dominance: The Administration is calling on Congress to take steps to remove outdated or unnecessary barriers to innovation, accelerate the deployment of AI across industry sectors, and facilitate broad access to the testing environments needed to build and deploy world-class AI systems. Educating Americans and Developing an AI-Ready Workforce: The Administration wants American workers to participate in and reap the rewards of AI-driven growth, encouraging Congress to further workforce development and skills training programs, expanding opportunities across sectors and creating new jobs in an AI-powered economy. 
But despite the anti-Big Government positioning of the current Administration, a key plank of the policy framework is a power-grab to ensure that the Federal authorities take charge of the regulatory climate and that attempts at ground level by individual US States to take their own actions are blocked. The framework claims: Importantly, this framework can succeed only if it is applied uniformly across the United States. A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race. The Federal government is uniquely positioned to set a consistent national policy that enables us to win the AI race and deliver its benefits to the American people, while effectively addressing the policy challenges that accompany this transformative technology. This shouldn't come as any surprise - Trump 2.0 officials have regularly spoken out against the idea of individual states, such as California, putting in place legislative and regulatory regimes to govern tech firms. Companies should not have to navigate multiple legal frameworks around the country, is the basic argument: The Federal government must establish a Federal AI policy framework to protect American rights, support innovation, and prevent a fragmented patchwork of state regulations that would hinder our national competitiveness, while respecting federalism and State rights. Congress should pre-empt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones. That's not in its own right unreasonable - although ironically it is the kind of centralist command-and-control mindset that Trump 2.0 ferociously criticizes the EU for - but it could only work in practice if there is a functional Federal-level authority to manage the fast-expanding AI industry. 
But having declared that States may not go their own way and have to leave matters to Washington, the Framework also insists: Congress should not create any new Federal rule-making body to regulate AI. So who should manage this? It seems that AI firms should be left to 'mark their own homework'! Trump 2.0's plan is that the Federal Government: ...should instead support development and deployment of sector-specific AI applications through existing regulatory bodies with subject matter expertise and through industry-led standards. To which critics will immediately point out that this has been tried before, such as with the big tobacco lobby, where cigarette firms were left to self-regulate and for years got away with denying the link between their products and cancer. Or we can look to the airline industry, where carriers have previously been allowed to self-certify the safety of their aircraft, all good until things start dropping out of the sky! So, what we're looking at here is the likes of OpenAI checking its own work and coming back to report that there's nothing to see here, all OK, carry on as you were. That's fine if you assume that the likes of Sam Altman, Elon Musk, Alex Karp, yada yada yada, can be relied upon to act in the best interests of society as a whole, not just what is theoretically best for their firms or for pushing AI into ever more hitherto uncolonized areas of human life, essentially uncontested. SexyGPT, here we come. As for AI launching World War III.... Just in case US States do decide to fight against this - they will! - the proposed Federal framework seeks to stamp out such defiance, explicitly stating that Congress needs to legislate against this outcome: States should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications. States should not unduly burden Americans' use of AI for activity that would be lawful if performed without AI. 
States should not be permitted to penalize AI developers for a third party's unlawful conduct involving their models. Meanwhile on the subject of copyright infringement, where AI model developers have run rampant through other people's intellectual property and knowledge bases to train up their models, Trump 2.0 comes down again on the side of the vendor lobby. While the US courts work their way through multiple copyright cases, doling out massive penalties to AI vendors for theft, and the EU, among others, takes action to protect content creators' rights, Trump 2.0 sets a course in the opposite direction: The Administration believes that training of AI models on copyrighted material does not violate copyright laws...Congress should consider enabling licensing frameworks or collective rights systems for rights holders to collectively negotiate compensation from AI providers, without incurring antitrust liability. Any such legislation, however, should not address when or whether such licensing is required. So, a nod to the rights of content creators, but don't get in the way of Big Tech as it marauds its way through the sum of human knowledge in pursuit of AI firms' share prices. As I read the Framework proposals in their current form, two thoughts spring to mind. One is that the money being spent on lobbyists in Washington looks increasingly like money well-spent, even if that spend is rising year-on-year faster than most of these firms' prospects of turning an actual profit. For example, in 2025, OpenAI's estimated lobbying spending of $2.1 million was up 24% year-on-year, while Meta is alleged to have one lobbyist on the payroll for every six members of the US Congress. The second is the blunt assessment issued by Salesforce CEO Marc Benioff at The World Economic Forum in Davos in January on the topic of regulation: These US tech companies, they hate regulation...They hate regulation. We have been warned.
If enacted in law, this proposed framework would put the US on a collision course with the EU, although in the current political climate that will be the least of Washington's concerns. But it will create competing global regulatory regimes for US firms to navigate - and budget for - as they expand outside of the domestic market, which in turn isn't actually terribly supportive of the stated US-first dominance goal. While the six principles espoused will allow enough ground cover for some political grandstanding and moral posturing, in practice it's a shameless abdication of societal responsibility to sections of Big Tech that have shown little sign that they can be trusted to self-govern in the best interests of all. The lines in the sand have been drawn. What happens next in terms of Congressional action will depend a lot on the outcomes of the mid-term elections later in the year. Watch those Big Tech lobbying budgets break the bank!
[20]
White House releases AI policy framework focused on state regulations, power generation - SiliconANGLE
White House releases AI policy framework focused on state regulations, power generation The White House today released a policy document with suggestions on how Congress should regulate artificial intelligence. The move is not unexpected. In December, U.S. President Donald Trump signed an executive order that instructed White House officials to craft a national AI policy framework. The directive previewed many of the suggestions included in today's document. The framework consists of more than two dozen recommendations organized into 7 sections. One of the sections that has drawn the most attention calls on Congress to limit states' ability to regulate AI. Last year, an attempt to include such a rule in the National Defense Authorization Act failed following broad bipartisan pushback. "Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones," reads the White House's policy framework. Another section of the document focuses on data centers. The White House is asking Congress to streamline the federal permitting process for AI infrastructure projects. The document places particular emphasis on so-called behind-the-meter installations, which are power generation systems co-located with data centers. Several major data center operators are investing in co-located energy infrastructure. One of them is Google LLC, which recently announced plans to build a cloud campus in Texas with on-site clean power generation systems. The search giant is building the facility in partnership with a utility called AES Corp. that will co-own and co-operate the systems. A third set of recommendations in the AI policy framework focuses on child safety. The White House is calling on Congress to mandate that AI providers integrate parental controls into their software. 
According to the document, such controls should enable parents to set screen time limits and manage privacy settings. The document goes on to suggest that lawmakers restrict what kind of data AI providers can use to train models. At the same time, the White House argues that Congress should limit tech firms' liability for AI-related risks. "Congress should avoid setting ambiguous standards about permissible content, or open-ended liability, that could give rise to excessive litigation," reads the document. The other recommendations in the framework cover more than a half dozen topics. Several of the suggested policies are meant to simplify private sector companies' AI development efforts. According to the White House, Congress should authorize federal agencies to make internal datasets available to model developers. The document also argues for the creation of "regulatory sandboxes for AI applications that help unleash American ingenuity and further American leadership in AI development and deployment."
[21]
Trump White House Proposes National AI Framework, Urges Federal Standard
The legislative recommendations highlight six policy areas, including copyright, energy and workforce development, while signaling a lighter regulatory stance. The Trump administration has released a national AI legislative framework for the United States, calling on Congress to establish a unified federal framework and warning that a patchwork of state laws could hinder innovation and competitiveness. The framework is structured around six core policy areas: protecting children and empowering parents; strengthening communities; intellectual property and creator rights; free speech protections; accelerating AI innovation; and workforce development. At the center of the proposal is a push for a unified federal approach, with the administration urging Congress to preempt state-level AI laws it says could burden developers. "Congress should preempt state AI laws that impose undue burdens," the framework states, warning that "a patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race." The framework also calls for fewer barriers to AI deployment, regulatory sandboxes and expanded access to federal datasets, while opposing the creation of a new dedicated AI regulator. On intellectual property, the proposal states: Although the Administration believes that training of AI models on copyrighted material does not violate copyright laws, it acknowledges arguments to the contrary exist and therefore supports allowing the Courts to resolve this issue. It also ties AI expansion to energy policy, urging faster permitting for data centers and support for on-site power generation, while saying residential ratepayers should not bear the cost of new infrastructure. Additional measures include tools to protect minors online, efforts to combat AI-enabled fraud and workforce training initiatives aimed at preparing workers for AI-driven shifts. The framework is nonbinding and will require Congressional action to be enacted.
While the White House framework emphasizes workforce development and job creation in an AI-driven economy, it does not address the risk of job displacement as adoption accelerates across industries. That shift has already become visible in the crypto sector, where companies are rapidly integrating AI across operations. Over the past two months, a growing number of fintech and crypto companies have reported layoffs. In February, Jack Dorsey's payments company Block said it would cut roughly 40% of its workforce, with the co-founder pointing to the rapid use of AI tools as a key driver behind the restructuring. More recently, blockchain data provider Messari announced layoffs alongside a leadership change, as the company pivots toward an AI-first strategy following an earlier round of cuts in 2025. The trend continued this week, with Crypto.com saying it plans to cut up to 12% of its workforce as it integrates AI across its operations. On Thursday, CEO Kris Marszalek warned on X that "companies that do not make this pivot immediately will fail." Volatility in the crypto market has also led to staff reductions. On Wednesday, the Algorand Foundation said it would cut about 25% of its workforce, citing broader market downturns and macroeconomic uncertainty.
[22]
White House unveils AI policy wishlist for Congress
The White House released its policy recommendations for artificial intelligence on Friday, stating its framework "can succeed only" in the absence of a patchwork of conflicting state laws on the emerging technology. The blueprint for Congress is split into seven priorities, ranging from kids online safety laws to the protection of free speech and the streamlining of AI infrastructure. The four-page outline follows an executive order from President Trump last December seeking to limit states' abilities to regulate AI and push forward efforts to regulate at the federal level. The recommendations will be sent to Congress, which has spent years deadlocked on AI and kids online safety regulations amid fierce partisan and intraparty disagreements. It comes ahead of the 2026 midterms, and recent polling indicates AI and data centers are expected to be a key issue for constituents. The White House acknowledged these concerns in a release Friday, writing it "recognizes that some Americans feel uncertain about how this transformative technology will affect issues they care about, like their children's wellbeing or their monthly electricity bill." The framework urges Congress to "build on" its kids online safety actions so far, like Sen. Ted Cruz's (R-Texas) Take it Down Act, which criminalized the publication of nonconsensual sexually explicit "deepfake" images and videos online. The White House said future regulations should give parents and guardians "robust tools" to manage children's online activity, along with creating "commercially reasonable [and] privacy protective" age assurance requirements -- another divisive issue on Capitol Hill. Notably, the White House recommended Congress not preempt states from enforcing their own kids online safety laws, including those related to sexual abuse material, even when created by AI.
It urges Congress to guarantee ratepayers will not face increased electricity costs from new AI data center construction and operation, while also streamlining federal permitting for faster infrastructure development. As expected, it calls on Congress's federal framework to preempt state laws, an issue that has divided GOP leaders across the country. Republican lawmakers in Washington failed to include a 10-year moratorium on state AI laws in legislation in two attempts last year. The White House, and many technology companies, argue preemption will eliminate "undue burdens" on innovation and boost America's competitive standing. It also calls for the protection of intellectual property rights, a topic that has led to various lawsuits against major AI firms, along with the prevention of censorship -- a key concern for Republicans. While the framework emphasizes the urgency of such actions, its passage in Congress is likely to be an uphill battle amid the long-standing debates and slim majorities. The White House said it will be working with Congress "in the coming months" to turn the recommendations into legislation.
[23]
White House Releases AI Policy Framework for Congress, With Six Guiding Principles
WASHINGTON (AP) -- The White House on Friday released its framework for how it wishes Congress will address the issue of artificial intelligence. The legislative blueprint, released on its website, outlines a half-dozen guiding principles for lawmakers to keep in mind when developing policies governing artificial intelligence. Those areas include: protecting children and empowering parents; safeguarding and strengthening American communities; respecting intellectual property rights; preventing censorship and protecting free speech; enabling innovation and ensuring American AI dominance; and educating Americans and developing an AI-ready workforce. "The Trump Administration is committed to winning the AI race to usher in a new era of human flourishing, economic competitiveness, and national security for the American people," the White House said in announcing its framework. "Achieving these goals requires a commonsense national policy framework that both enables American industry to innovate and thrive and ensures that all Americans benefit from this technological revolution." The White House said "strong federal leadership" is needed to make sure the public can trust how artificial intelligence is being used in their lives. Members of Congress from both parties, as well as civil liberties and consumer rights groups, have pushed for more regulations on AI, saying there is not enough oversight for the powerful technology. President Donald Trump signed an executive order in December to block states from crafting their own regulations, arguing that a patchwork of rules would hurt growth in the sector.
[24]
Is Trump's New AI Framework a Bid to Consolidate Power?
On Friday, the Trump administration released its recommendations for Congress on a national policy regarding artificial intelligence. The four-page bulleted document outlines general ideas for a legislative framework. While on the surface they seem to be vague calls for safety and free speech, some AI ethics experts are crying foul. The guidelines outline some protections for the public, while allowing AI companies to ramp up innovation without the "burden" of strict guardrails. The six objectives call for child safety requirements while also addressing how residents shouldn't pay increased electricity costs for data center buildout, how the country can develop an AI-friendly workforce, and what state versus federal regulation on technology should look like. The framework says that Congress should make sure that state laws "do not govern areas better suited to the Federal Government or act contrary to the United States' national strategy to achieve global AI dominance." Overall, the framework recommends a light touch on AI regulation. Critics see the guidelines as a way Trump is trying to both protect Big Tech and gain control over which tech companies are targeted and censored. A notable point of contention in the proposed framework calls on lawmakers to "preempt state laws that impose undue burdens" and "prevent a fragmented patchwork of state regulations." President Donald Trump has tried to stifle efforts by states to regulate AI in the past, most recently with an executive order in December, saying that state legislation was too "cumbersome" and was not allowing companies to innovate. "This roadmap is a poison pill for states' rights," says Rumman Chowdhury, a former U.S. science envoy for AI. "By dictating congressional behavior and again targeting state-level regulation, Trump is expanding presidential authority further."
In one section, the framework says that "states should not be permitted to penalize AI developers for a third party's unlawful conduct involving their models." This is a red flag for critics, who say this could result in a way to shield AI companies from being held liable for harms. "At a moment when a clear majority of Americans -- across party lines -- is asking for stronger guardrails on AI, this framework moves in the opposite direction, proposing to limit the ability of parents, consumers, and communities to hold technology companies accountable for the risks and harms their products cause," says Alondra Nelson, who previously led the Biden administration's Office of Science and Technology Policy. The central criticism against the framework is Trump's effort to suppress state power when it comes to regulating AI technology by asking for new legislation that would limit the reach of current and future law. "There is deep irony in this," says Nelson. "States have been acting as the laboratories of democracy they have always been, responding to real harms reported by their constituents, from algorithmic discrimination in hiring and lending to the exploitation of children by AI-powered platforms." The government claims that a patchwork of state laws creates confusion for business and ignores the global nature of AI development, an argument which one AI policy expert called "weak." "There are many cases where we have states making their own laws -- education, insurance, drug laws, and even reproductive care now -- and companies seem to manage just fine," says an AI policy expert who asked to remain anonymous because they hadn't received permission from their employer to speak to the press. The expert said states can move quickly, focus on what's important, and borrow and learn from each other when innovating on AI regulation. 
"In this very framework there are numerous other places -- child safety, state use of AI, law enforcement use of AI -- where the administration allows states to go their own way," they point out. And while the first part of the proposal focuses on "protecting children and empowering parents," critics say its recommendations aren't specifically geared to holding AI companies accountable for protecting children. "It doesn't include reference to stronger proposals such as removing liability shields for AI companies when their products lead to harm to minors," says Steven Feldstein, technology researcher and author of The Rise of Digital Repression. "It looks like more of the same for this administration," Feldstein continued, summing it up as "light touch regulations on AI, keep states at bay from enacting their own rules, free up companies to innovate and trust they won't release models that will bring harm, and vague details about how this will end up coming together." Some critics who spoke with Rolling Stone said they feel like the proposal's call for federal preemption is a red herring covering the Trump administration's real goal of expanding presidential authority. "The federal government wants more centralized power over how the companies design their systems," stated one expert. Chowdhury agrees. "This AI bill should be viewed as part of his ongoing strategy to consolidate power in his presidency," she says, bringing up Trump's executive order from December, which she described as the president mandating "a list of 'onerous' state-level legislation that he means to attack." Another section of the framework that has raised alarms is one which addresses preventing censorship and protecting free speech. "Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas," reads the proposal.
Critics see this language as intentionally vague, which leaves the door open for Trump to be the judge and jury of what he does or doesn't like, without any specific standard, giving him a type of invisible control over companies. As the AI policy expert who asked to remain nameless characterizes it: "By threatening those who develop AI models with vague and incoherent language about 'ideological bias' that cannot be evaluated in any meaningful way, the administration is saying, 'I, and only I, will decide what's appropriate for your models to produce, and I'll use whatever rationale I feel like to do so.'" Nelson says that acting as if AI tools and systems are completely neutral in their ideology betrays a fundamental misunderstanding of generative AI. "Every model encodes assumptions, every tool reflects choices, and every output carries a point of view," says Nelson. "There is no neutral baseline to protect. There is only transparency about those choices, or the lack of it -- along with robust laws to ensure this." Additionally, Nelson points to a recent NBC News poll that found the majority of registered voters believe the risks of AI outweigh its benefits. "Americans are telling us, clearly and consistently, what they want: safe, ethical, and accountable AI," says Nelson. "This framework offers them something else entirely."
[25]
Trump releases AI policy to pre-empt state rules
The White House has unveiled a new artificial intelligence policy. This plan aims to create a single national framework for AI regulation. It seeks to prevent varied state laws and ensure consistent rules across the country. The policy also focuses on protecting children online and managing energy costs associated with AI. The White House released an artificial intelligence policy on Friday that aims to pre-empt state rules, ensure protections for children and shield communities from prohibitive energy costs. The Trump administration has been pushing for a single legislative framework that can be applied uniformly across the country, rather than leaving states to form their own plans. US President Donald Trump in December said he would withhold federal broadband funding from states whose laws to regulate artificial intelligence are judged by his administration to be holding back American dominance in the technology. The AI industry has been a powerful profit driver for the tech sector in recent years, propelling chipmaker Nvidia to become the world's largest company, while tech behemoths Amazon.com, Meta Platforms, Alphabet and Microsoft pour billions of dollars into the burgeoning sector. The White House said it looked forward to working with Congress to turn the framework into legislation. "We need one national AI framework, not a 50-state patchwork," Michael Kratsios, science and technology adviser to Trump, told The Daily Signal. "And I think one of the key provisions of it that will make it all work and come together is really focusing on the bipartisan consensus around protecting America's children." Protections in the White House framework include giving parents control of accounts and devices to protect their children's privacy, and the framework suggests features to combat potential sexual exploitation or self-harm. The framework calls on Congress to streamline permitting so that electricity-gobbling data centers can generate their own power on site.
It wants to increase the federal government's ability to fight AI-generated scams and national security concerns. The plan calls for removing barriers to innovation, accelerating AI deployment across business sectors and making it easier to build top-grade AI systems, with a goal of ensuring global AI dominance. The framework includes provisions on intellectual property rights, preventing censorship and protecting free speech and developing an AI-proficient workforce by educating Americans.
[26]
White House Releases National AI Framework
WASHINGTON, March 20 (Reuters) - The White House released a framework on artificial intelligence on Friday that aims to ensure protections for children, communities and small businesses as part of a national plan to regulate developments in the field. The Trump administration has been pushing for a single legislative framework that can be applied uniformly across the country, rather than leaving states to form their own plans. "The administration looks forward to working with Congress in the coming months to turn this framework into legislation that the president can sign," the White House said in a statement. (Reporting by Katharine Jackson and Doina Chiacu; Editing by Katharine Jackson)
[27]
Framework for US AI legal framework finally in place
Although the national artificial intelligence legislative framework was born of a presidential directive aimed at a number of concerns, it offers a very short, and very unspecific, way forward, even as a wind of unregulated AI blows across the globe. "The White House's national AI legislative framework will unleash American ingenuity to win the global AI race, delivering breakthroughs that create jobs, lower costs, and improve lives for Americans across the country," - Michael Kratsios, Assistant to the President for Science and Technology and director of the Office of Science and Technology Policy. The framework points to six objectives intended to strike a balance between innovation and user trust, and even touches on censorship. One of the points is "Educating Americans", which details that "Congress should use non-regulatory methods to ensure that existing education programs and workforce training and support programs, including apprenticeships, affirmatively incorporate AI training." It is, however, certain that state laws will not apply, and that rule-making should instead work via sector-specific regulatory bodies rather than relying heavily on one regulatory body. Critics have pointed out that there is no provision for accountability in case someone gets hurt by AI technology, and others have said, a bit more bluntly, that it lacks substance. It has, however, not solved one of the main flaws of AI: how and what to do with copyright issues before, during and after the use of AI.
[28]
White House Unveils National AI Policy to Sweep Aside State Regulations | PYMNTS.com
The framework aims to provide a consistent national policy, the White House said in a Friday press release. "Importantly, this framework can succeed only if it is applied uniformly across the United States," the White House said in the release. "A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race." The framework's objectives include protecting children and empowering parents by providing account controls, safeguarding and strengthening communities by making it easier to secure permits for on-site power generation at data centers, and supporting creators by respecting intellectual property rights while also allowing fair use by AI. The framework also calls for protecting free speech by preventing AI systems from being used to silence political expression, enabling American dominance in AI by removing barriers to innovation, and developing an AI-ready workforce by expanding workforce development and skills training programs. "The Administration looks forward to working with Congress in the coming months to turn this framework into legislation that the President can sign," the White House said in the release. President Donald Trump signed an executive order in December 2025 that directed the federal government to establish a new national approach to AI and to push back against state-by-state AI rules the administration said are slowing AI innovation. It was reported Tuesday (March 17) that AI companies and investors have clamored for Congress to enact a single, federal standard that would override the growing patchwork of often conflicting state AI regulations.
Currently, around 20 states have passed comprehensive privacy laws covering AI and several others have passed more limited measures. It was reported in February that Utah had become the latest flashpoint between states and the White House over AI regulations. As a bill regulating AI was being considered by the state's legislature, the White House Office of Intergovernmental Affairs sent a letter to a state senator saying that the bill is "unfixable" and "goes against the Administration's AI Agenda."
[29]
Trump administration calls on Congress to pass AI legislation By Investing.com
Investing.com -- The Trump Administration released a national legislative framework on Friday aimed at addressing artificial intelligence policy, calling on Congress to pass comprehensive legislation covering six key areas of AI development and deployment. The framework focuses on protecting children through parental controls and privacy features, with requirements for AI platforms accessible to minors to implement safeguards against sexual exploitation and self-harm content. On energy infrastructure, the administration proposed that Congress streamline permitting to allow data centers to generate power on site rather than relying on the electrical grid. The framework states that ratepayers should not bear costs associated with data center operations. The proposal addresses intellectual property rights by seeking to balance protections for creators and innovators with allowing AI systems to make fair use of materials for learning purposes. The administration outlined plans to prevent AI systems from being used for censorship or silencing political expression, stating that AI development should pursue truth and accuracy without government restrictions on content. The framework calls for removing regulatory barriers to AI innovation and expanding access to testing environments for AI system development and deployment across industry sectors. On workforce development, the administration proposed expanding training programs to prepare American workers for jobs in AI-related fields. The White House emphasized that the framework requires uniform application across all states, warning that conflicting state laws would undermine American innovation and competitiveness in AI development. The administration stated it will work with Congress in coming months to convert the framework into legislation for presidential signature.
[30]
Trump administration proposes national framework to oversee artificial intelligence
The White House wants to establish unified federal regulation for artificial intelligence to curb state-level initiatives and prevent legal fragmentation. Donald Trump's administration has unveiled a draft legislative framework aimed at establishing a coherent national policy for artificial intelligence. Structured around six key pillars, the plan includes measures for system security, child protection, energy management and data center regulation. It also calls on Congress to address sensitive issues such as intellectual property and the use of AI in political discourse. The stated objective is to limit the proliferation of local regulations, as several states, including New York and California, have launched their own initiatives. The administration believes such fragmentation could stifle innovation and weaken the competitiveness of American companies against international rivals. Consequently, the proposal suggests that Congress preempt certain state laws to impose a uniform federal framework. The White House hopes for a swift adoption of this framework, with the ambition of seeing it passed in the coming months. Michael Kratsios, head of science policy, mentioned the possibility of bipartisan support despite current political divisions. The project intends to balance technological development with oversight, highlighting the economic benefits of AI while addressing concerns related to consumer and worker protection.
[31]
White House releases national AI framework
WASHINGTON, March 20 (Reuters) - The White House released a framework on artificial intelligence on Friday that aims to ensure protections for children, communities and small businesses as part of a national plan to regulate developments in the field. The Trump administration has been pushing for a single legislative framework that can be applied uniformly across the country, rather than leaving states to form their own plans. "The administration looks forward to working with Congress in the coming months to turn this framework into legislation that the president can sign," the White House said in a statement. (Reporting by Katharine Jackson and Doina Chiacu; Editing by Katharine Jackson)
The Trump administration released a legislative framework for AI regulation that would centralize power in Washington by preempting state AI laws. The plan prioritizes innovation over strict oversight, places significant responsibility on parents for child safety, and avoids creating new federal regulatory bodies. Critics argue it provides inadequate safeguards while limiting states' ability to act as early regulators of emerging risks.
The Trump administration released a legislative framework for AI regulation on Friday that would establish federal oversight of AI while preempting state laws across the United States. [1]
The Trump AI framework outlines seven key objectives that prioritize AI development and innovation, proposing a centralized federal approach that would override stricter state-level regulations. [1] "This framework can only succeed if it is applied uniformly across the United States," reads a White House statement, warning that "a patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race." [1]

The proposal comes three months after Trump signed an executive order directing federal agencies to challenge state AI laws, giving the Commerce Department 90 days to compile a list of "onerous" state regulations. [1] Sen. Marsha Blackburn introduced the policy package, called the Trump America AI Act, in Congress on Thursday, attempting to codify the vision laid out in Trump's 2025 AI Action Plan. [2] The framework proposes a "minimally burdensome national standard," echoing the administration's broader push to "remove outdated or unnecessary barriers to innovation." [1]
The AI policy framework places significant responsibility on parents, rather than tech companies, for safeguarding minors. "Parents are best equipped to manage their children's digital environment and upbringing," the framework reads, calling on Congress to give parents tools like account controls to protect their children's privacy and manage device use. [1] The plan calls on Congress to "empower parents and guardians with robust tools to manage their children's privacy settings, screen time, content exposure and account controls." [5]

The framework favors age verification for AI, suggesting that Congress "establish commercially reasonable, privacy protective, age assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors." [3] While it calls on AI companies to implement features that "reduce the risks of sexual exploitation and harm to minors," it does not lay out any clear, enforceable requirements. [1] The plan also highlights protecting kids from AI-powered deepfakes, particularly AI-generated child sexual abuse material. [2]
The framework explicitly warns against creating "any new federal rulemaking body to regulate AI," instead suggesting that existing government watchdogs oversee the sector "through industry-led standards." [5] This light-touch approach, championed by White House AI czar and venture capitalist David Sacks, focuses less on guardrails and more on promoting companies' growth. [1] Missing from the framework are any gestures toward liability frameworks, independent oversight, or enforcement mechanisms for potential novel harms caused by AI. [1]

"It is light on protection and heavy on promotion of dangerous AI systems," said Alan Butler, president and executive director of the Electronic Privacy Information Center. [2] Brendan Steinhauser, CEO of The Alliance for Secure AI, criticized the approach: "White House AI czar David Sacks continues to do the bidding of Big Tech at the expense of regular, hardworking Americans. This federal AI framework seeks to prevent states from legislating on AI and provides no path to accountability for AI developers for the harms caused by their products." [1]

While the framework nods to federalism, the carve-outs for states are relatively narrow, preserving only their authority over general laws covering fraud and child protection, zoning, and states' own use of AI. [1] The plan draws a hard line against states regulating AI development itself, arguing that AI is an "inherently interstate phenomenon with key foreign policy and national security implications." [3] The framework also seeks to prevent states from "penalizing AI developers for a third party's unlawful conduct involving their models," a key liability shield for developers. [1]

The unexpected move by the White House comes amid a fierce MAGA backlash against the technology from within Trump's own coalition. [5] The president has struggled to convince large parts of his own party to fall in line: the administration has twice tried to pass legislation banning state-level AI regulation, but both measures failed amid opposition from Republican senators and governors. [5] A series of recent polls has shown that concern about data centers and the societal impact of AI is widespread among Trump voters. [5]
The framework discourages Congress from taking up AI copyright issues, stating: "Although the Administration believes that training of AI models on copyrighted material does not violate copyright laws, it acknowledges arguments to the contrary exist and therefore supports allowing the Courts to resolve this issue." [3] The plan reiterates the administration's position that AI companies are covered by fair use, meaning they would not have to obtain permission from or pay copyright holders when using copyrighted content to train their models. [2]

In line with Trump's previous AI Action Plan, the framework calls on states and local governments to streamline data center construction and operation. [2] The plan also calls for protecting consumers from electricity price spikes; Trump has pushed tech companies including Amazon, Meta, Microsoft and Google to ensure corporations cover the cost of the power they use for AI initiatives. [4] The framework addresses censorship as well, though only insofar as it would prevent AI companies from building ideological or partisan bias into their products. [2]

Many in the AI industry are celebrating this direction because it gives them broader liberty to innovate without the threat of regulation. "This framework is exactly what startups have been asking for: a clear national standard so they can build fast and scale," Teresa Carlson, president of General Catalyst Institute, told TechCrunch. [1] Critics counter that states have been quicker to pass laws around emerging risks, with New York's RAISE Act and California's SB-53 seeking to ensure that large AI companies maintain, and adhere to, publicly documented safety protocols. [1]

Mackenzie Arnold, director of US policy at the Institute for Law & AI, said the framework was "clearer on what it doesn't want than on what it does," expressing concern that "the framework continues to treat governance and innovation as competing aims." [5] It is unclear whether the White House proposal will muster enough support on Capitol Hill, where mandates on tech companies have divided Republicans. [4] The framework and its provisions will take effect only if Congress adopts them into legislation and passes them into law. [3]