2 Sources
[1]
'Find a way to work together' -- Sam Altman's message to the Department of Defense and Anthropic
Altman weighs in on why AI companies can't afford to keep escalating their fights with government

* Sam Altman urged the government and Anthropic to de-escalate tensions and work together on AI governance
* He argued that governments should hold power over AI and national security decisions
* He said he still mostly trusts the government, while accepting that many don't

Relations between Anthropic and the U.S. government have become an unusually combustible flashpoint in the broader fight over AI regulation and control. The escalating fight began when negotiations with the Pentagon over how Anthropic's Claude AI model could be used broke down over the company's refusal to remove safeguards against fully autonomous weapons or mass domestic surveillance. Responses from Washington, including an executive directive banning federal agencies from using Anthropic's technology and labeling the company a "supply chain risk," led to lawsuits alleging constitutional violations, and a federal judge has since temporarily blocked the Pentagon's actions.

OpenAI CEO Sam Altman apparently sees harmony as necessary on both ends of the argument. "Find a way to work together. Like stop, stop the stuff on both, stop the escalation on both sides and find a way to work together," Altman said in an interview with Laurie Segall.

AI's security demands

AI companies have hyped the technology's potential in realms like national security, even as they lobby for a light regulatory touch. Altman has apparently concluded that the companies cannot have it both ways. If AI is as geopolitically consequential as everyone keeps insisting, then governments are going to want a hand on the wheel.

"I don't think it works for our industry to say, Hey, this is the most powerful technology humanity has ever built," Altman said. "It is going to be the high order bit in geopolitics. It is going to be the greatest cyber weapon the world has ever built. It is going to, you know, be the determinant of future wars and protection. And we are not giving it to you."

Of course, whether people feel comfortable with the government controlling such consequential technology is another question. Altman said he still mostly trusts the system of checks and balances, though he did acknowledge that many people currently "really don't trust the government to follow the law." It's a position that stands out compared to some AI leaders who are more suspicious of the government.

Nonetheless, he thinks it would be a mistake not to help the government with national security, especially in cyber infrastructure. "I think we have to work with government, but the intensity of the current mood of mistrust, I was miscalibrated on and I understand something there now," he said.

Trust in AI control

Essentially, Altman and others aligned with him want to work with governments, even as public distrust over the misuse of AI grows. "One of the most important questions the world will have to answer in the next year is, are AI companies or are governments more powerful? And I think it's very important that the governments are more powerful," Altman said. "The future of the world, and the decisions about the most important elements of national security should be made through a democratically elected process. And the people that have been appointed as part of that process, not me, and not the CEO of some other lab."

Altman kept coming back to the way the power of AI is arriving faster than institutions, governments, or most humans can calibrate for it.
The systems are getting more capable, and their potential for misuse grows in tandem. The stakes are higher and more serious all the time. Big fights between those who are supposed to devise safe regulations and the companies that are, at least theoretically, trying to steer the technology in an ethical direction represent an enormous problem. A diplomatic shrug urging diametrically opposed sides to "find a way to work together" is unlikely to resolve matters. Still, at least it means Altman knows the answer won't be obvious, even if he phrased it as a request to ChatGPT.
[2]
Anthropic's dispute with US government exposes deeper rifts over AI governance, risk and control - SiliconANGLE
The escalating dispute between Anthropic PBC and the U.S. Department of Defense is exposing a fundamental tension in the artificial intelligence market: who ultimately controls how powerful AI systems are used. What began as a contracting and policy disagreement has evolved into a broader debate over national security, corporate responsibility and the limits of self-governance in emerging technologies.

At the center of the conflict is the Pentagon's designation of Anthropic as a "supply chain risk," a move that effectively bars the company's models from use in defense-related systems. President Donald Trump later ordered all federal agencies to stop doing business with Anthropic. That decision has been challenged in court and is now under a preliminary injunction, but its implications are already reverberating across enterprise information technology and AI development practices.

A Gartner Inc. report in late March said the episode underscores how deeply embedded AI models have become in software systems and the vulnerability to policy shocks that this creates. "Anthropic's exclusion underscores how quickly embedded model dependencies can convert into structural technical debt," the firm wrote, noting that even minor changes in model behavior can require "broad functional revalidation" and potentially disrupt production systems.

At the heart of the dispute is Anthropic's insistence on restricting how its models can be used, particularly in areas such as mass surveillance and autonomous weapons. That stance has triggered a wider debate over whether private companies should define ethical boundaries for technologies with societal and geopolitical implications.

SiliconANGLE contacted numerous AI experts and industry executives. Though most declined to comment on the politically loaded issue, those who agreed to be quoted largely backed Anthropic's right to dictate restrictions on the use of its technology. Several argued that the Pentagon's framing of the issue as a supply chain risk is overstated.

The conflict appears less about security vulnerabilities and more about disagreements over acceptable use, said David Linthicum, a cloud and AI subject matter expert. "If a company says it does not want its AI used for certain military or domestic surveillance purposes, that is a policy and governance issue," he said.

Carlos Montemayor, a philosophy professor at San Francisco State University, took a more critical view of the government's position, suggesting the designation may be punitive. "The government is punishing Anthropic for not following orders," he said, calling the move unjustified and potentially a signal to other AI providers to align with federal expectations.

That divergence in interpretation reflects a broader ambiguity: Should AI systems be treated like interchangeable software components or as strategic assets subject to tighter alignment with state priorities?

Linthicum supports giving companies the right and responsibility to set limits. "If a company builds powerful technology, it has every right to say what it will and will not support," he said. However, he emphasized that those decisions shouldn't occur in isolation. Governments, courts and customers all have roles in shaping acceptable use.
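Gartner's warning about embedded model dependencies converting into technical debt is easiest to see in code. The sketch below is a minimal, hypothetical illustration, not any vendor's real SDK: names like ModelProvider, PrimaryProvider and SubstituteProvider are assumptions. The idea is that application code depends only on a narrow interface, so a policy shock that bars one provider forces a change in a single adapter rather than a rewrite across every workflow.

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Narrow interface the rest of the application depends on."""

    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 512) -> str: ...

class PrimaryProvider(ModelProvider):
    """Adapter for the preferred vendor; the real SDK call is elided here."""

    def complete(self, prompt: str, max_tokens: int = 512) -> str:
        # In production this would call the vendor's client library.
        return f"[primary] {prompt[:max_tokens]}"

class SubstituteProvider(ModelProvider):
    """Drop-in replacement if policy bars the primary vendor."""

    def complete(self, prompt: str, max_tokens: int = 512) -> str:
        return f"[substitute] {prompt[:max_tokens]}"

def summarize(doc: str, provider: ModelProvider) -> str:
    # Business logic sees only the interface, never a vendor SDK,
    # so a forced swap changes one constructor call, not every caller.
    return provider.complete(f"Summarize:\n{doc}", max_tokens=256)

if __name__ == "__main__":
    print(summarize("Quarterly report text...", PrimaryProvider()))
    print(summarize("Quarterly report text...", SubstituteProvider()))
```

The tradeoff is that the interface must stay deliberately narrow; every provider-specific feature that leaks through it recreates exactly the coupling Gartner describes.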
Valence Howden, an advisory fellow at Info-Tech Research Group Inc., echoed that view, arguing that organizations "have a responsibility to define the ethical boundaries and use cases of their technologies," particularly as AI systems take on more autonomous roles.

Others were less comfortable with corporate self-regulation, though. Montemayor argued that allowing companies to set their own ethical frameworks is "unacceptable and dangerous," given the scale and impact of AI systems. "From an ethical perspective, companies should not dictate from their narrow engineering and commercial point of view what is right or wrong for societies around the globe," he said. Montemayor called for international regulation grounded in human rights principles, warning that current approaches create "too much uncertainty about the future of this technology."

Gartner analysts suggest that these decisions often come down to business tradeoffs. Contractual restrictions on how technology can be used are common, but enforcing them is difficult. In Anthropic's case, limitations around autonomous weapons may reflect not only ethical concerns but also technical constraints. "Frontier AI systems are simply not reliable enough to power fully autonomous weapons," wrote Anthropic Chief Executive Dario Amodei.

At first glance, broad government restrictions on doing business with Anthropic may appear to be a devastating blow to the company. But despite the potential loss of lucrative government contracts, several experts believe Anthropic's stance could strengthen its position in the enterprise market.

Marc Fernandez, chief strategy officer at Neurologyca Science & Marketing SL, framed the issue in terms of long-term trust. "Holding the line on restrictions is going to be expensive [for Anthropic] in the short term," he said, but clear boundaries can signal reliability in high-stakes environments. "Over time, that kind of reliability becomes a massive competitive advantage."

Linthicum agreed that consistency matters. "A lot of enterprise customers want to know that a vendor has clear values and will stick to them under pressure," he said. Anthropic's position could thus make it "more attractive to many customers, not less," provided its policies are clearly defined and consistently applied.

Info-Tech's Howden also highlighted the trust factor, noting that maintaining restrictions "has likely benefited them [Anthropic] in an industry that hasn't always been built on trust and honesty."

Some observers said the dispute reflects a deeper misunderstanding of what AI systems are and how they should be governed. The Pentagon appears to be treating AI like "the next version of Microsoft Excel -- a tool you buy, own and use however you want," Anaconda Inc. Chief Executive David DeSanto noted in a LinkedIn post. "But that's not what this technology is." Unlike spreadsheets, AI systems are capable of "judgment and autonomous action," requiring new governance frameworks that can't be retrofitted onto existing procurement and oversight models. That gap, DeSanto said, is evident not only in government but across enterprises, where leaders often assume they can "bolt AI onto existing infrastructure and figure out the hard stuff like governance responsibilities later."

Anaconda Field Chief Technology Officer Steve Croce warned against "normalization of deviance," or the tendency for organizations to lower their guard as long as systems continue to function without obvious failures.
"When companies like Anthropic start to pull back safety standards, it sets a precedent," he wrote. Enterprises need to prioritize "AI sovereignty," or the ability to define and enforce their own guardrails, rather than relying on external providers. Beyond the ethical and political dimensions, the Anthropic dispute is likely to force organizations to confront practical challenges in AI adoption, Gartner notes. Unlike productivity software, replacing a model is not simply a matter of switching back ends. It often requires requalifying entire workflows, retraining systems and recalibrating performance benchmarks. "A forced model swap is not just a verification task," the firm noted. "It is a requalification of the AI-dependent system." This creates a paradox: Organizations that invest heavily in optimizing AI-driven workflows may achieve higher productivity, but face greater disruption when policy changes force them to switch providers. As a result, Gartner recommends that engineering leaders treat "provider volatility as an immediate continuity risk" and design systems for portability, modularity and rapid substitution. It's clear that AI is no longer just a technical issue but a governance challenge that cuts across business strategy, national security and societal values. The outcome of this dispute will likely help shape how those often competing priorities are balanced in the years ahead.
OpenAI CEO Sam Altman calls for de-escalation between Anthropic and the U.S. government as their dispute over AI safety standards and military applications intensifies. The conflict began when Anthropic refused to remove safeguards against autonomous weapons and mass surveillance, leading to a Pentagon ban labeling the company a supply chain risk. Altman argues governments must hold power over AI and national security decisions.
OpenAI CEO Sam Altman has urged Anthropic and the US government to "find a way to work together" as their dispute over AI governance escalates into a broader confrontation about who controls powerful AI systems [1]. Speaking in an interview with Laurie Segall, Altman called for both sides to "stop the escalation" and pursue collaboration, reflecting growing concerns that the AI industry cannot simultaneously claim geopolitical significance while refusing government oversight [1].
Source: TechRadar
The conflict centers on Anthropic's refusal to remove safeguards from its Claude AI model that prevent use in fully autonomous weapons or mass domestic surveillance applications. When negotiations with the U.S. Department of Defense broke down, the Pentagon designated Anthropic as a supply chain risk, effectively barring federal agencies from using the company's technology [1]. President Donald Trump later expanded this restriction through an executive directive banning all federal agencies from doing business with Anthropic [2].

A federal judge has since temporarily blocked the Pentagon ban through a preliminary injunction, but the implications continue to reverberate across the AI industry [2]. Gartner Inc. noted in a late March report that the episode highlights how deeply embedded AI models have become in software systems and their vulnerability to policy shocks. "Anthropic's exclusion underscores how quickly embedded model dependencies can convert into structural technical debt," the firm wrote, warning that even minor changes in model behavior can require broad functional revalidation [2].

The standoff exposes fundamental questions about corporate responsibility in AI development. David Linthicum, a cloud and AI subject matter expert, argued that the Pentagon's framing as a supply chain risk is overstated. "If a company says it does not want its AI used for certain military or domestic surveillance purposes, that is a policy and governance issue," he said [2]. Carlos Montemayor, a philosophy professor at San Francisco State University, went further, suggesting the government is "punishing Anthropic for not following orders" [2].

Altman's position stands out among AI leaders for its emphasis on government authority over AI systems. "One of the most important questions the world will have to answer in the next year is, are AI companies or are governments more powerful? And I think it's very important that the governments are more powerful," he stated [1]. He argued that decisions about national security and geopolitics should be made through democratically elected processes rather than by CEOs of AI labs.

The OpenAI chief acknowledged the complexity of the situation, recognizing that public trust in government has eroded. "I think we have to work with government, but the intensity of the current mood of mistrust, I was miscalibrated on and I understand something there now," he said [1]. Despite this, Altman maintained he still mostly trusts the system of checks and balances, even while accepting that many people "really don't trust the government to follow the law."
Source: SiliconANGLE
The dispute raises critical questions about whether private companies should define ethical boundaries for technologies with societal implications. Anthropic CEO Dario Amodei has defended the company's restrictions, noting that "Frontier AI systems are simply not reliable enough to power fully autonomous weapons" [2]. This technical constraint combines with ethical concerns to justify the company's stance on military applications.

Valence Howden, an advisory fellow at Info-Tech Research Group Inc., supported Anthropic's approach, arguing that organizations "have a responsibility to define the ethical boundaries and use cases of their technologies," particularly as AI systems take on more autonomous roles [2]. However, Montemayor warned against corporate self-governance, calling it "unacceptable and dangerous" given the scale and impact of AI systems. He advocated for international regulation grounded in human rights principles [2].

Altman highlighted the fundamental contradiction in the AI industry's position: companies cannot simultaneously claim their technology represents the most powerful force in human history while refusing to share it with democratically elected governments. "I don't think it works for our industry to say, Hey, this is the most powerful technology humanity has ever built. It is going to be the high order bit in geopolitics... And we are not giving it to you," he said [1]. The stakes continue to rise as AI safety standards, mass surveillance concerns, and questions about who controls these transformative technologies remain unresolved.