4 Sources
[1]
US draws up strict new AI guidelines amid Anthropic clash, FT reports
March 6 (Reuters) - The Trump administration has drawn up strict rules for civilian artificial intelligence contracts that would require AI companies to allow "any lawful" use of their models amid a stand-off between the Pentagon and Anthropic, the Financial Times reported on Friday. A draft of the new guidelines reviewed by the FT says AI groups seeking to do business with the government must grant the U.S. an irrevocable license to use their systems for all legal purposes, the report added. Reuters could not immediately verify the report. (Reporting by Bipasha Dey in Bengaluru; Editing by Himani Sarkar)
[2]
US draws up strict new AI guidelines amid Anthropic clash: Report
The Trump administration is proposing strict rules for civilian AI contracts, requiring companies to permit "any lawful" use of their models. This comes after the Pentagon designated Anthropic a "supply-chain risk," barring its technology from military work due to disputes over safeguards. The proposed guidelines also mandate that AI systems avoid partisan judgments and that companies disclose foreign compliance modifications.

The Trump administration has drawn up strict rules for civilian artificial intelligence contracts that would require AI companies to allow "any lawful" use of their models amid a stand-off between the Pentagon and Anthropic, the Financial Times reported on Friday. The report comes a day after the Pentagon formally designated Anthropic a "supply-chain risk" and barred government contractors from using the AI firm's technology in work for the U.S. military. That move followed a months-long dispute over the company's insistence on safeguards that the Defense Department says went too far.

A draft of the guidelines reviewed by the FT says AI groups seeking business with the government must grant the U.S. an irrevocable license to use their systems for all legal purposes. The guidance from the U.S. General Services Administration (GSA) would apply to civilian contracts and is part of a broader government-wide effort to strengthen AI services procurement, the FT reported, adding that it mirrors measures the Pentagon is considering for military contracts. The White House and the GSA did not immediately respond to requests for comment.

The draft from the GSA also mandates that contractors "must not intentionally encode partisan or ideological judgments into the AI systems data outputs," the FT reported. It also requires companies to disclose whether their models have been "modified or configured to comply with any non-U.S. federal government or commercial compliance or regulatory framework," the FT reported.
[3]
White House eyes 'any lawful use' mandate for AI firms in new draft By Investing.com
Investing.com -- The Trump administration has drafted aggressive new guidelines for civilian artificial intelligence contracts, requiring companies to permit "any lawful" use of their models by the U.S. government, according to the Financial Times. The move follows a high-stakes standoff between the Department of War and Anthropic, which culminated Thursday in the Pentagon designating the AI firm a "supply-chain risk."

Market sentiment in the tech sector remained cautious following the report. Shares of AI-adjacent firms wavered as investors assessed the potential for a broader regulatory crackdown on "ethical" guardrails that conflict with national security objectives. The Nasdaq 100 had slipped 1.51% by 15:59 ET (20:59 GMT), while Microsoft Corporation (NASDAQ:MSFT) fell 0.42% and Alphabet Inc Class A (NASDAQ:GOOGL) was down 0.78% as the industry weighed the implications of the U.S. General Services Administration's (GSA) "irrevocable license" requirement.

Anthropic designated a 'supply-chain risk'

The new GSA rules reflect a hardening stance within the military establishment. The Pentagon's designation of Anthropic as a "supply-chain risk," a label typically reserved for foreign entities like Huawei, effectively bars any government contractor from utilizing the firm's technology. The "blacklisting" stems from a months-long dispute over Anthropic's refusal to waive safeguards against mass domestic surveillance and the use of its Claude models in lethal autonomous weapons. Secretary of War Pete Hegseth defended the move on Friday, stating that the U.S. requires "patriotic" technology partners who do not impose restrictive "red lines" on lawful operations. Anthropic has argued that the designation is legally unsound and plans to challenge the decision in court. The Trump administration has granted a six-month transition period for agencies to migrate away from Anthropic's systems.

Ideological neutrality and disclosure mandates

The newly published GSA draft mandates that AI contractors must not "intentionally encode partisan or ideological judgments" into their data outputs. The rule aims to strip what the administration describes as "embedded bias" or "wokeness" from government-funded models. Companies must now disclose if their models have been modified to comply with non-U.S. regulatory frameworks, such as the European Union's AI Act. Analysts at Evercore ISI noted that these requirements could force a "decoupling" of the American AI stack from global standards. Traders are now watching closely for a response from OpenAI, which reportedly stepped in to fill the Pentagon's vacuum shortly after Anthropic's exile.
The Trump administration has drafted aggressive new AI guidelines requiring companies to permit any lawful use of their models by the U.S. government. The move follows the Pentagon's formal designation of Anthropic as a supply-chain risk, barring government contractors from using the AI firm's technology in military work after months of disputes over safeguards against surveillance and autonomous weapons.
The Trump administration has drafted strict new AI guidelines that would fundamentally reshape how AI companies do business with the U.S. government. According to a Financial Times report, the draft rules for civilian artificial intelligence contracts require AI companies to allow "any lawful use" of their models, granting the government an irrevocable license to deploy these systems for all legal purposes [1]. The guidance from the U.S. General Services Administration would apply to civilian contracts and represents part of a broader government-wide effort to strengthen AI services procurement, with similar measures under consideration for military contracts [2].

The proposed AI guidelines emerge against the backdrop of an escalating standoff between the Pentagon and Anthropic. The Department of Defense formally designated Anthropic a "supply-chain risk" on Thursday, effectively barring government contractors from using the AI firm's technology in work for the U.S. military [2]. This blacklisting, with a label typically reserved for foreign entities like Huawei, stems from a months-long dispute over Anthropic's insistence on safeguards that the Defense Department deemed excessive. Specifically, Anthropic refused to waive protections against mass domestic surveillance and the use of its Claude models in lethal autonomous weapons [3]. Secretary of War Pete Hegseth defended the decision, stating the U.S. requires "patriotic" technology partners who do not impose restrictive "red lines" on lawful operations. Anthropic has indicated it plans to challenge the designation in court, arguing it is legally unsound. The Trump administration granted a six-month transition period for agencies to migrate away from Anthropic's systems.
Beyond the "any lawful use" requirement, the draft AI guidelines include additional provisions aimed at ensuring ideological neutrality in government AI systems. The GSA draft mandates that contractors "must not intentionally encode partisan or ideological judgments into the AI systems data outputs," targeting what the administration describes as "embedded bias" or "wokeness" in government-funded models [2]. Companies must also disclose whether their models have been modified or configured to comply with non-U.S. regulations, such as the European Union's AI Act [2]. Analysts at Evercore ISI suggest these disclosure requirements could force a "decoupling" of the American AI stack from global standards [3].
Market sentiment in the tech sector turned cautious following the Reuters and Financial Times reports. The Nasdaq 100 slipped 1.51% by 15:59 ET, while Microsoft Corporation fell 0.42% and Alphabet Inc Class A dropped 0.78% as investors assessed the potential for a broader regulatory crackdown on ethical guardrails that conflict with national security objectives [3]. Traders are now watching closely for responses from other major AI companies, particularly OpenAI, which reportedly stepped in to fill the Pentagon's vacuum shortly after Anthropic's exile [3].

The new rules signal a hardening stance within the military establishment and could fundamentally alter how AI firms balance ethical considerations with government contract opportunities. For AI companies seeking federal business, these guidelines present a stark choice: accept broad government usage rights for military and other applications, or risk exclusion from lucrative government contracts.