2 Sources
[1]
White House officials reportedly frustrated by Anthropic's law enforcement AI limits
Anthropic's AI models could potentially help spies analyze classified documents, but the company draws the line at domestic surveillance. That restriction is reportedly making the Trump administration angry.

On Tuesday, Semafor reported that Anthropic faces growing hostility from the Trump administration over the AI company's restrictions on law enforcement uses of its Claude models. Two senior White House officials told the outlet that federal contractors working with agencies like the FBI and Secret Service have run into roadblocks when attempting to use Claude for surveillance tasks.

The friction stems from Anthropic's usage policies that prohibit domestic surveillance applications. The officials, who spoke to Semafor anonymously, said they worry that Anthropic enforces its policies selectively based on politics and uses vague terminology that allows for a broad interpretation of its rules.

The restrictions affect private contractors working with law enforcement agencies who need AI models for their work. In some cases, Anthropic's Claude models are the only AI systems cleared for top-secret security situations through Amazon Web Services' GovCloud, according to the officials.

Anthropic offers a specific service for national security customers and made a deal with the federal government to provide its services to agencies for a nominal $1 fee. The company also works with the Department of Defense, though its policies still prohibit the use of its models for weapons development.

In August, OpenAI announced a competing agreement to supply more than 2 million federal executive branch workers with ChatGPT Enterprise access for $1 per agency for one year. The deal came one day after the General Services Administration signed a blanket agreement allowing OpenAI, Google, and Anthropic to supply tools to federal workers.

Wedged between money and ethics

The timing of the friction with the Trump administration creates complications for Anthropic as the company reportedly conducts media outreach in Washington. The administration has repeatedly positioned American AI companies as key players in global competition and expects reciprocal cooperation from these firms.

However, this is not Anthropic's first known conflict with Trump administration officials. The company previously opposed proposed legislation that would have prevented US states from passing their own AI regulations.

In general, Anthropic has been walking a difficult road between maintaining its company values, seeking contracts, and raising venture capital to support its business. For example, in November 2024, Anthropic announced a partnership with Palantir and Amazon Web Services to bring Claude to US intelligence and defense agencies through Palantir's Impact Level 6 environment, which handles data up to the "secret" classification level. The partnership drew criticism from some in the AI ethics community who saw it as contradictory to Anthropic's stated focus on AI safety.

On the larger stage, the potential surveillance capabilities of AI language models have drawn scrutiny from security researchers. In a December 2023 Slate editorial, security researcher Bruce Schneier warned that AI models could enable unprecedented mass spying by automating the analysis and summarization of vast conversation datasets. He noted that traditional spying methods require intensive human labor, but AI systems can process communications at scale, potentially shifting surveillance from observing actions to interpreting intent through sentiment analysis.
As AI models become capable of processing human communications at unprecedented scale, the battle over who gets to use them for surveillance (and under what rules) is just getting started.
[2]
Anthropic Wants to Be the One Good AI Company in Trump's America
It put limits on law enforcement usage and backed an AI safety bill. Just ignore the whole pirating-a-bunch-of-books thing.

Anthropic, the artificial intelligence company behind the chatbot Claude, is trying to carve out a spot as the Good Guy in the AI space. Fresh off being the only major AI firm to throw its support behind an AI safety bill in California, the company grabbed a headline from Semafor thanks to its apparent refusal to allow its model to be used for surveillance tasks, which is pissing off the Trump administration.

According to the report, law enforcement agencies have felt stifled by Anthropic's usage policy, which includes a section restricting the use of its technology for "Criminal Justice, Censorship, Surveillance, or Prohibited Law Enforcement Purposes." That includes banning the use of its AI tools to "Make determinations on criminal justice applications," "Target or track a person's physical location, emotional state, or communication without their consent," and "Analyze or identify specific content to censor on behalf of a government organization."

That's been a real problem for federal agencies, including the FBI, Secret Service, and Immigration and Customs Enforcement, per Semafor, and has created tensions between the company and the current administration, despite Anthropic giving the federal government access to its Claude chatbot and suite of AI tools for just $1.

According to the report, Anthropic's policy is broad, with fewer carveouts than competitors'. For instance, OpenAI's usage policy restricts the "unauthorized monitoring of individuals," which may not rule out using the technology for "legal" monitoring. OpenAI did not respond to a request for comment.

A source familiar with the matter explained that Anthropic's Claude is being used by agencies for national security purposes, including cybersecurity, but the company's usage policy restricts uses related to domestic surveillance. A representative for Anthropic said that the company developed ClaudeGov specifically for the intelligence community, and that the service has received "High" authorization from the Federal Risk and Authorization Management Program (FedRAMP), allowing its use with sensitive government workloads. The representative said Claude is available for use across the intelligence community.

One administration official bemoaned to Semafor that Anthropic's policy makes a moral judgment about how law enforcement agencies do their work, which, like...sure? But it's as much a legal matter as a moral one. We live in a surveillance state; law enforcement can surveil people without warrants, has done so in the past, and will almost certainly continue to do so in the future. A company choosing not to participate in that, to the extent that it can resist, is covering its own ass just as much as it is staking out an ethical stance. If the federal government is peeved that a company's usage policy prevents it from performing domestic surveillance, maybe the primary takeaway is that the government is performing widespread domestic surveillance and attempting to automate it with AI systems.

Anyway, Anthropic's theoretically principled stance is the latest move in its effort to position itself as the reasonable AI firm. Earlier this month, it backed an AI safety bill in California that would require it and other major AI companies to submit to new and more stringent safety requirements meant to ensure their models are not at risk of doing catastrophic harm.
Anthropic was the only major player in the AI space to throw its weight behind the bill, which awaits Governor Newsom's signature (which may or may not come, as he previously vetoed a similar bill). The company is also in D.C., pitching rapid adoption of AI with guardrails (but emphasis on the rapid part).

Its position as the chill AI company is perhaps a bit undermined by the fact that it pirated millions of books and papers that it used to train its large language model, violating the rights of the copyright holders and leaving the authors high and dry without payment. A $1.5 billion settlement reached earlier this month will put at least some money into the pockets of the people who actually created the works used to train the model. Meanwhile, Anthropic was just valued at nearly $200 billion in a recent funding round that will make that court-ordered penalty into a rounding error.
Anthropic faces criticism from the Trump administration over its policy limiting AI use in law enforcement, particularly for surveillance tasks. The company aims to position itself as an ethical AI provider while navigating government contracts and partnerships.
Anthropic, the artificial intelligence company behind the chatbot Claude, has found itself at the center of a controversy involving the Trump administration and law enforcement agencies. The company's strict usage policies, which prohibit the use of its AI models for domestic surveillance and certain law enforcement tasks, have reportedly frustrated White House officials and federal contractors working with agencies like the FBI and Secret Service [1].
Despite the tension, Anthropic has secured significant government contracts. The company offers its services to federal agencies for a nominal $1 fee and has partnered with the Department of Defense, albeit with restrictions on weapons development [1]. Anthropic has also collaborated with Palantir and Amazon Web Services to bring Claude to US intelligence and defense agencies through Palantir's Impact Level 6 environment, which handles data up to the "secret" classification level [1].
Anthropic appears to be positioning itself as the "Good Guy" in the AI space. The company recently backed an AI safety bill in California, making it the only major AI firm to support more stringent safety requirements [2]. This move, along with its restrictions on law enforcement usage, seems to be part of a broader strategy to differentiate Anthropic from its competitors in the AI industry.
However, Anthropic's ethical stance is not without its challenges. The company has faced criticism for pirating millions of books and papers to train its large language model, violating copyright holders' rights [2]. A recent $1.5 billion settlement aims to compensate the affected authors, but it remains a contentious issue given Anthropic's recent valuation of nearly $200 billion [2].
The controversy surrounding Anthropic's policies highlights the broader debate about AI's role in surveillance and law enforcement. Security researchers like Bruce Schneier have warned about the potential for AI language models to enable unprecedented mass spying by automating the analysis of vast communication datasets [1]. As AI capabilities continue to advance, the battle over who gets to use these technologies for surveillance purposes is likely to intensify.

Summarized by Navi