4 Sources
[1]
White House considers vetting AI models before they are released, NYT reports
May 4 (Reuters) - U.S. President Donald Trump is considering the introduction of government oversight over new models of artificial intelligence, the New York Times reported on Monday, citing officials briefed on the deliberations. The U.S. government is discussing an executive order to create an A.I. working group that would bring together tech executives and government officials to examine potential oversight procedures, according to the newspaper. Reuters could not immediately verify the report. Reporting by Chandni Shah in Bengaluru
[2]
White House Considers Vetting A.I. Models Before They Are Released
Tripp Mickle and Sheera Frenkel reported from San Francisco, and Julian Barnes and Dustin Volz from Washington. President Trump, who promoted a hands-off approach to artificial intelligence and gave Silicon Valley free rein to roll out the technology, is considering the introduction of government oversight over new A.I. models, according to U.S. officials and people briefed on the deliberations. The administration is discussing an executive order to create an A.I. working group that would bring together tech executives and government officials to examine potential oversight procedures, according to U.S. officials, who declined to be identified in order to discuss deliberations over sensitive policies. Among the potential plans is a formal government review process for new A.I. models. In meetings last week, White House officials told executives from Anthropic, Google and OpenAI about some of those plans, people briefed on the conversations said. The working group is likely to consider a number of oversight approaches, officials said. But a review process could be similar to one being developed in Britain, which has assigned several government bodies to ensure that A.I. models meet certain safety standards, people in the tech industry and the administration said. The discussions signal a stark reversal in the Trump administration's approach to A.I. Since returning to office last year, Mr. Trump has been a major booster of the technology, which he has said is vital to winning the geopolitical contest against China. Among other moves, he swiftly rolled back a Biden administration regulatory process that asked A.I. developers to perform safety evaluations and report on A.I. models with potential military applications. "We're going to make this industry absolutely the top, because right now it's a beautiful baby that's born," Mr. Trump said of A.I. at an event in July. "We have to grow that baby and let that baby thrive. We can't stop it. We can't stop it with politics. 
We can't stop it with foolish rules and even stupid rules." Mr. Trump left room for some rules, but he added that "they have to be more brilliant than even the technology itself." That noninterventionist policy began changing last month after the start-up Anthropic announced a new A.I. model called Mythos. Mythos is so powerful at identifying security vulnerabilities in software that it could lead to a cybersecurity "reckoning," said Anthropic, which declined to release the model to the public. The White House wants to avoid any political repercussions if a devastating A.I.-enabled cyberattack were to occur, people in the tech industry and the administration said. The administration is also evaluating whether new A.I. models could yield cyber-capabilities that could be useful to the Pentagon and U.S. intelligence agencies, they said. To get ahead of models like Mythos, some officials are pushing for a review system that would give the government first access to A.I. models, but that would not block their release, people briefed on the talks said. The shift on A.I. has sowed confusion. As conversations between the White House and tech companies continue, some executives have argued that too much government oversight will slow down U.S. innovation against China, the people briefed on the discussions said. But the companies also do not agree on how the United States should move forward with potential regulation. "The technology is moving extremely fast, and there are few formal procedures, but they also don't want to overregulate," said Dean Ball, who was a senior adviser on A.I. in the Trump administration before leaving last year for the Foundation for American Innovation. "It's a tricky balance." A White House official said that discussions of any potential executive order were "speculation" and that Mr. Trump would make any policy announcement himself. The changing policy on A.I. coincides with a leadership change at the White House. 
In March, David Sacks, the White House A.I. czar who had spearheaded the administration's deregulation efforts, said he was leaving the role. Susie Wiles, the White House chief of staff, and Treasury Secretary Scott Bessent have stepped in to fill Mr. Sacks's position, some of the people said. Ms. Wiles and Mr. Bessent have told people outside the administration that they plan to have a bigger hand in shaping A.I. policy. But Ms. Wiles's and Mr. Bessent's plans have been complicated by a bitter dispute between the Pentagon and Anthropic. This year, the start-up and the Pentagon became embroiled in a fight over a $200 million contract and how the military should use A.I. in warfare. When the two sides failed to agree on terms, the Pentagon cut off the government's use of Anthropic's technology in March. Anthropic has since sued the government. The conflict has made it difficult for some government agencies, which had come to rely on Anthropic's technology, according to military, intelligence and other U.S. officials. Anthropic's A.I. is still being used by the military in a system known as Maven, which helps analyze intelligence and suggests targets for airstrikes in the war in Iran. The National Security Agency has also recently used Anthropic's Mythos model to assess vulnerabilities in the U.S. government's software, people with knowledge of the work said. Last month, Ms. Wiles and Mr. Bessent held a meeting at the White House with Dario Amodei, Anthropic's chief executive, with a focus on getting the company's technology back in use by the government. Both sides later described the meeting as "productive." Officials said that if the administration moves ahead with vetting A.I. models, the working group would help determine the agencies that would help with that effort. 
With no federal agency responsible for all government cybersecurity work, some officials said having the N.S.A., the White House Office of the National Cyber Director and the director of national intelligence oversee the model review was the best way to proceed. The working group could also look at whether there is a role for the Center for A.I. Standards and Innovation, an agency the Biden administration established to vet A.I. models that are voluntarily shared with the government. Under Mr. Trump, the organization has been sidelined, people in the industry said, even though the White House said in an A.I. policy paper that the group should play a role in assessing "the performance and reliability of A.I. systems." Any of these moves would take the administration far from a philosophy on regulation that Vice President JD Vance outlined in a speech at an international A.I. gathering in Paris last year. At the time, he warned industry and government officials that "excessive regulation of the A.I. sector could kill a transformative industry just as it's taking off." "The A.I. future is not going to be won by hand-wringing about safety," he said. "It will be won by building." Cade Metz, Kate Conger and Tyler Pager contributed reporting.
[3]
Why some AI tools are being banned by the US government -- and what it means for you
Should AI models be vetted before release? The White House thinks it might be time The era of "move fast and break things" in AI may be coming to an abrupt halt. According to a recent New York Times report, the Trump administration is reportedly preparing a landmark executive order that would require Big Tech to submit their most powerful models for government vetting before they are allowed to go public. This move underscores how the rules are changing: AI is no longer seen as a regular tech tool, but as a national security asset. Here's what's behind the conversation. Why the sudden change? The catalyst for this shift appears to be the recent limited release of Anthropic's Claude Mythos. While touted as a breakthrough in cybersecurity, federal officials have raised alarms about the model's "frightening" ability to autonomously discover and exploit unpatchable software vulnerabilities in critical infrastructure. According to the report, the administration's new stance is driven by three key factors: * The 'Mythos' effect: Claims that frontier models are now skilled enough to bypass traditional cyber defenses. * Domestic compute sovereignty: A push to ensure the U.S. government has priority access to the world's most powerful processing power. * The Anthropic rift: A reported fallout between the White House and Anthropic over military usage rights, leading the administration to lean more heavily on partnerships with OpenAI and Google. Inside the discussion Last week, high-ranking White House officials reportedly met with CEOs Sundar Pichai (Google), Sam Altman (OpenAI), and Dario Amodei (Anthropic) to discuss the logistics of a government-led "working group." The goal of the discussion was reportedly to create a standardized "red-teaming" process in which federal experts audit a model's capabilities before it is ever launched. The takeaway If signed, this order could slow the breakneck pace of AI innovation in ways you'll actually notice.
New "Pro" and "Ultra" model updates may take longer to arrive as they move through a rigorous vetting process, trading speed for added safety. Supporters say that's a win for reliability, but critics warn it could give international rivals like Deepseek an edge if they face fewer restrictions. This potential shift suggests we may be heading toward a two-tier AI world: government-certified "safe" models for businesses and institutions, and a separate, less regulated lane for hobbyists and power users. Time will tell. For now, it's a tradeoff: slower progress in exchange for tighter control.
[4]
White House considers government reviews for AI models, NYT reports
WASHINGTON, May 4 (Reuters) - U.S. President Donald Trump is considering the introduction of government oversight over new models of artificial intelligence, the New York Times reported on Monday, citing officials briefed on the deliberations. The U.S. government is discussing an executive order to create an AI working group that would bring together tech executives and government officials to examine potential oversight procedures, according to the newspaper. A White House official declined to confirm or deny the report. "Any policy announcement will come directly from the president. Discussion about potential executive orders is speculation." The newspaper said the White House was considering a formal government review process for new AI models. The Times said the change could be prompted by concerns about Anthropic's new AI model called Mythos, which cybersecurity experts warn could supercharge complex cyberattacks. Its capabilities to code at a high level have given it a potentially unprecedented ability to identify cybersecurity vulnerabilities and devise ways to exploit them, experts said. The move would be a sharp reversal for Trump, who has urged a hands-off approach. Trump in July released an AI blueprint that aimed to loosen environmental rules and vastly expand AI exports to allies, in a bid to maintain the American edge over China in the critical technology. On Trump's first day in office in 2025, he revoked a 2023 executive order signed by Joe Biden that sought to reduce the risks that artificial intelligence poses to consumers, workers and national security. Biden's order required developers of AI systems that pose risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government, in line with the Defense Production Act, before they were released to the public. 
The White House in March unveiled an AI policy for Congress that urges lawmakers to enact legislation to pre-empt state rules, protect children and shield communities from high energy costs related to the burgeoning technology. (Reporting by David Shepardson and Alex Alper in Washington and Chandni Shah in Bengaluru; Editing by Sonali Paul)
The Trump administration is reportedly preparing an executive order that would require government reviews for AI models before public release, marking a dramatic reversal from its previous hands-off approach. The shift comes after Anthropic's Mythos model raised alarm over its ability to exploit software vulnerabilities, prompting national security concerns and discussions with tech executives from OpenAI, Google, and Anthropic.
The White House is considering a significant policy reversal on artificial intelligence, with the Trump administration reportedly discussing an executive order that would introduce government oversight over new AI models before they reach the public. According to the New York Times, the administration is planning to create an AI working group that would bring together tech executives and government officials to examine potential oversight procedures, including a formal government review process for new AI models[2]. This represents a stark departure from President Donald Trump's earlier stance, which promoted a hands-off approach and gave Silicon Valley free rein to develop the technology without significant regulatory constraints[1].
The catalyst for this dramatic shift appears to be Anthropic's recent announcement of Mythos, a powerful AI model that the company itself described as capable of causing a cybersecurity "reckoning." The model's advanced ability to identify software vulnerabilities and exploit them has raised alarm bells within the administration about potential cybersecurity risks to critical infrastructure[2]. Federal officials have characterized Mythos's capabilities as "frightening," particularly its ability to autonomously discover and exploit unpatchable security flaws[3]. The White House wants to avoid political repercussions if a devastating AI-enabled cyberattack were to occur, prompting discussions about federal vetting before public release of frontier models[2].
Last week, White House officials met with executives from Anthropic, Google, and OpenAI to discuss the proposed oversight framework. High-ranking officials briefed CEOs including Sundar Pichai of Google, Sam Altman of OpenAI, and Dario Amodei of Anthropic on plans that could include a standardized red-teaming process in which federal experts audit a model's capabilities before launch[3]. Some officials are pushing for a review system that would give the government first access to AI models, though notably this would not block their release entirely[2]. The working group is likely to consider multiple oversight approaches, with one possibility being a process similar to Britain's model, which assigns government bodies to ensure AI models meet certain safety standards[2].

This potential shift marks a complete reversal from the administration's earlier position. On his first day in office in 2025, Trump revoked a 2023 executive order signed by Joe Biden that required developers of AI systems posing risks to national security to share safety test results with the government before public release[4]. In July, Trump described AI as "a beautiful baby that's born" that needed to thrive without being stopped by "foolish rules and even stupid rules"[2]. The administration's new stance is also driven by concerns about domestic compute sovereignty and ensuring the U.S. government has priority access to the world's most powerful processing capabilities[3].
The changing approach to vetting AI models coincides with significant leadership changes at the White House. In March, David Sacks, the White House AI czar who spearheaded deregulation efforts, left his role. Susie Wiles, the White House chief of staff, and Treasury Secretary Scott Bessent have stepped in to fill the position, with both indicating plans to have a bigger hand in shaping AI policy[2]. However, their efforts have been complicated by a bitter dispute between the Pentagon and Anthropic over a $200 million contract and how the military should use AI in warfare. When the two sides failed to agree on terms, the Pentagon cut off government use of Anthropic's technology in March, and Anthropic subsequently sued the government[2].

The shift on government reviews for AI models has created confusion and debate within the tech industry. Some executives have argued that excessive government oversight will slow U.S. innovation against China, though the companies themselves disagree on how the United States should move forward with potential regulation[2]. "The technology is moving extremely fast, and there are few formal procedures, but they also don't want to overregulate," said Dean Ball, a former senior adviser on AI in the Trump administration. "It's a tricky balance"[2]. Critics warn that slower approval processes could give international rivals like Deepseek an edge if they face fewer restrictions, potentially creating a two-tier AI world with government-certified "safe" models for businesses and institutions and a separate, less regulated lane for other users[3]. A White House official declined to confirm the reports, stating that "any policy announcement will come directly from the president" and that "discussion about potential executive orders is speculation"[4].