4 Sources
[1]
US Commerce Department deletes Microsoft, Google, xAI security-test details
The US Commerce Department has removed from its website the details of an agreement under which Microsoft, Google, and xAI agreed to submit new AI models to government scientists for security testing before release, Reuters reported on Monday. The original page, posted on May 5, said the three companies would hand over their frontier AI systems to the department's testing team to be reviewed for cyberattack vulnerabilities, military-misuse risk, and national-security flaws before public deployment. By Monday afternoon Washington time, the link returned a "Sorry, we cannot find that page" error message; it was subsequently redirected to the website of the Center for AI Standards and Innovation, the government body that runs the tests.

The Center, the successor to the US AI Safety Institute (AISI), is housed within the National Institute of Standards and Technology (NIST), part of the Department of Commerce. The renaming and refocusing followed an executive order that scaled back the previous administration's AI-safety architecture and reframed the institute's mission around standards and industry coordination rather than safety evaluation.

Reuters reported that neither the Commerce Department nor the Trump White House responded immediately to requests for comment on why the page was deleted. There is no public statement from Microsoft, Google, or xAI on the change. The May 5 announcement had been read at the time as a notable commitment by the three companies to pre-deployment government review, and as a sign of growing federal concern about national-security risks posed by powerful AI systems. The deal followed the Trump administration's removal of Anthropic from a Pentagon AI contract over alleged safety-related constraints; Anthropic was not named as a participant in the Commerce Department testing programme.

The deletion does not necessarily mean the programme has been cancelled. The Center for AI Standards and Innovation continues to operate, and the redirected webpage suggests the relationship between the agencies and the three companies remains in place at an operational level. Several federal officials have, however, publicly questioned the wisdom of giving the government access to frontier AI models pre-release, because such access could become a target for nation-state cyber-espionage.

The story matters most as a signal. The Commerce Department's willingness to remove a positive AI-safety announcement from its public-facing website, without explanation, will be read by both critics and supporters of US AI policy as evidence of internal disagreement about how the government should engage with frontier AI labs. Industry observers had treated the original May 5 announcement as a stable element of the new administration's AI-policy posture.

Microsoft, Google and xAI did not respond to Reuters' requests for comment. Anthropic, OpenAI, Meta and other large model providers were not part of the original announcement and have not commented on the deletion. The Center for AI Standards and Innovation's website, where the redirect now points, contains general information on its programme but does not currently include the specifics of the pre-release testing arrangement that were on the deleted Commerce page.
[2]
The Government’s Page About Its AI Vetting Deals with Google, xAI, and Microsoft Is Missing from Its Website
About a week ago, the Commerce Department's Center for AI Standards and Innovation (CAISI) announced a deal with the AI companies Microsoft, xAI, and Google that allows the government to inspect new AI models before they're released to the general public. Anthropic and OpenAI signed something similar way back in 2024. Here's a long excerpt from the government's announcement, dated May 5, 2026: "Today, the Center for AI Standards and Innovation (CAISI) at the Department of Commerce’s National Institute of Standards and Technology announced new agreements with Google DeepMind, Microsoft and xAI. Through these expanded industry collaborations, CAISI will conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security. These agreements build on previously announced partnerships, which have been renegotiated to reflect CAISI’s directives from the secretary of commerce and America’s AI Action Plan." But that excerpt had to be pulled from the Wayback Machine because that announcement is currently missing from the CAISI website. Reuters seems to have been the first to notice this, writing on Monday afternoon that the original URL resolved to an error page that said "Sorry, we cannot find that page," and then later redirected to the main CAISI page on the Commerce Department website. As of this writing on Monday night, the URL still redirects to the CAISI page. "These agreements support information-sharing," the archived announcement says, along with "ensuring a clear understanding in government of AI capabilities and the state of international AI competition." Gizmodo requested comment from the White House and Commerce Department on Monday evening, but did not immediately hear back. We will update this article if we receive a reply.
[3]
Commerce Department removes AI testing agreement details from website By Investing.com
Investing.com -- The U.S. Commerce Department removed details from its website about its agreement with Google, xAI and Microsoft to test their artificial intelligence models for security vulnerabilities. The link that previously led to the department's announcement about the testing is no longer available, according to Reuters. As of Monday afternoon in Washington, it said, "Sorry, we cannot find that page." The link later redirected to the Center for AI Standards and Innovation's website, the government organization responsible for the tests. The Commerce Department announced on Tuesday that the companies would hand over new AI models before they deploy them to the public, allowing government scientists to test them for security flaws. Concern is growing in the U.S. government over the national security risks posed by powerful AI systems, including Anthropic's Mythos. By securing early access to advanced models, U.S. officials said they were aiming to identify threats ranging from cyberattacks to military misuse. It was not immediately clear why the website was deleted. This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.
[4]
Microsoft, Google, xAI security test details deleted from US government website
WASHINGTON, May 11 (Reuters) - The U.S. Commerce Department removed details from its website about its agreement with Google, xAI and Microsoft to test their artificial intelligence models for security vulnerabilities, according to a Reuters review of the agency's site. The link that previously led to the department's announcement about the testing is no longer available. As of Monday afternoon in Washington, it said, "Sorry, we cannot find that page." The link later redirected to the Center for AI Standards and Innovation's website, the government organization responsible for the tests. The Commerce Department announced on May 5 that the companies would hand over new AI models before they deploy them to the public, allowing government scientists to test them for security flaws. Concern is growing in the U.S. government over the national security risks posed by powerful AI systems, including Anthropic's Mythos. By securing early access to advanced models, U.S. officials said they were aiming to identify threats ranging from cyberattacks to military misuse. It was not immediately clear why the website was deleted. Spokespeople for the Commerce Department and Trump White House did not immediately respond to a request for comment from Reuters. (Reporting by Courtney Rozen; Editing by Nick Zieminski)
The US Commerce Department has deleted from its website the details of a May 5 agreement requiring Google, Microsoft, and xAI to submit new AI models for government security testing before public release. The link first returned an error message and now redirects to the Center for AI Standards and Innovation's site; the unexplained removal signals potential internal disagreement about how the government should engage with frontier AI labs.
The US Commerce Department has quietly removed from its website the specifics of a recently announced agreement with Google, Microsoft, and xAI that required the companies to submit their AI models for government security testing before public deployment [1]. The original page, posted on May 5, outlined how the three tech giants would hand over their frontier AI systems to the department's testing team for evaluation of cyberattack vulnerabilities, military-misuse risks, and national security flaws [4]. By Monday afternoon Washington time, the link returned a "Sorry, we cannot find that page" error message before being redirected to the Center for AI Standards and Innovation website [2].

Neither the US Commerce Department nor the Trump White House provided immediate comment on why the security test details were deleted from the site, and there has been no public statement from Microsoft, Google, or xAI regarding the change [1]. The May 5 announcement had been viewed as a significant commitment to pre-deployment government review and reflected growing federal concern about national security risks posed by powerful AI systems [3].

The deletion does not necessarily mean the programme has been cancelled entirely. The Center for AI Standards and Innovation, which houses the testing operation within the National Institute of Standards and Technology, continues to operate, and the redirected webpage suggests the relationship between the agencies and the three companies may remain in place at an operational level [1]. However, the Center, the successor body to the US AI Safety Institute, has undergone significant changes following an executive order that scaled back the previous administration's AI-safety architecture and reframed the institute's mission around standards and industry coordination rather than safety evaluation [1].

The agreement with Google, xAI, and Microsoft was part of expanded industry collaborations intended to conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance AI security [2]. These agreements built on previously announced partnerships, including those signed with Anthropic and OpenAI in 2024, which had been renegotiated to reflect the Center for AI Standards and Innovation's directives [2].

The Commerce Department's willingness to remove a positive announcement about the pre-release review of AI models from its public-facing website without explanation signals potential internal disagreement about how the government should engage with frontier AI labs [1]. Industry observers had treated the original May 5 announcement as a stable element of the new administration's AI policy posture, making the deletion particularly notable [1].

Several federal officials have publicly questioned the wisdom of giving government scientists access to frontier AI systems before release, arguing that such access could become a target for nation-state cyber-espionage [1]. This concern highlights the delicate balance between identifying threats early, from cyberattacks to military misuse, and protecting sensitive AI technology from potential foreign adversaries [4].

The deal had followed the Trump administration's removal of Anthropic from a Pentagon AI contract over alleged safety-related constraints, though Anthropic was not named as a participant in the Commerce Department testing programme [1]. Other major AI model providers, including OpenAI and Meta, were not part of the original announcement and have not commented on the deletion [1].