6 Sources
[1]
Anthropic forms new security council to help secure AI's place in government
On Aug. 27, Anthropic, the company behind Claude, unveiled what it calls its "National Security and Public Sector Advisory Council" -- an 11-member council that includes a former U.S. senator and intelligence chief, to guide how its models are deployed in U.S. defense and government applications. This might look like yet another Beltway advisory board, but it appears to be Anthropic's way of locking in its place in the compute-hungry, deep-pocketed U.S. national security sector.

Anthropic has already launched Claude Gov, a version of its AI tuned to "refuse less" when handling sensitive or classified queries. It has also secured a $200 million prototype contract with the Pentagon's Chief Digital and Artificial Intelligence Office alongside Google, OpenAI, and xAI. Claude Gov is live at Lawrence Livermore National Laboratory and is being offered to federal agencies for a symbolic $1 price tag to spur adoption.

This push toward the public sector matters because training frontier models is now all about infrastructure. Anthropic's next-gen Claude models will run on "Rainier," a monster AWS supercluster powered by hundreds of thousands of Trainium 2 chips. Amazon has poured $8 billion into Anthropic and has positioned it as the flagship tenant for its custom silicon. Meanwhile, Anthropic is hedging with Google Cloud, where it taps TPU accelerators and offers Claude on the FedRAMP-compliant Vertex AI platform.

By contrast, OpenAI still relies heavily on Nvidia GPUs via Microsoft Azure -- though it has started renting Google TPUs -- while Elon Musk's xAI scrapped its custom Dojo wafer-level processor initiative and fell back on Nvidia and AMD hardware. Google DeepMind remains anchored to Google's in-house TPU pipeline but has kept a lower profile in defense. Neither has assembled anything like Anthropic's new council, though. Anthropic's council can also be seen as a sign that access to compute is becoming a national security priority.
The Center for a New American Security has already acknowledged that securing and extending the government's access to compute will play a "decisive role in whether the United States leads the world in AI or cedes its leadership to competitors." Nvidia Blackwell GPUs are sold out through most of 2025, export controls are unpredictable, and U.S. agencies are scrambling to secure reliable training capacity. By recruiting insiders from the Department of Energy and the intelligence community, Anthropic is aiming to secure both the hardware and policy headroom it needs to stay competitive.

This strategy is risky: tying the Claude brand to the Pentagon may alienate some users and could saddle Anthropic with political baggage. But there are also clear rewards, including steady contracts, priority access to chips, and a direct role in shaping public sector AI standards. Someone, somewhere, has made some careful calculations, and Anthropic's leadership is clearly hoping they'll pay off.
[2]
Anthropic forms national security advisory council to guide AI use in government
Aug 27 (Reuters) - Artificial intelligence company Anthropic launched a National Security and Public Sector Advisory Council on Wednesday, aiming to deepen ties with Washington and allied governments as AI becomes increasingly central to defense and strategic planning. The move underscores AI firms' growing efforts to shape policies and ensure their technology supports democratic interests amid global competition. The launch also comes as rivals such as OpenAI and Google DeepMind step up engagement with governments and regulators on AI safety, though neither has announced a dedicated national security advisory council. Anthropic's council brings together former senators and senior officials from the U.S. Defense Department, intelligence agencies as well as energy and justice departments. It will advise Anthropic on integrating AI into sensitive government operations while shaping standards for security, ethics and compliance. Its members include Roy Blunt, a former senator and intelligence committee member, David S. Cohen, a former deputy CIA director, and Richard Fontaine, who leads the Center for a New American Security. Other appointees held top legal and nuclear security roles across Republican and Democratic administrations. Anthropic said the group will advise on high-impact applications in cybersecurity, intelligence analysis and scientific research, while helping set industry standards for responsible AI use. The company plans to expand the council as partnerships with public-sector institutions grow. The announcement follows Anthropic's $200 million agreement with the Pentagon last month to develop AI tools for defense, highlighting the sector's push to balance innovation with security risks. The initiative reflects intensifying global competition over AI capabilities, with Washington seeking to maintain an edge against rivals such as China and Russia. 
Reporting by Akash Sriram in Bengaluru; Editing by Shreya Biswas
[3]
Exclusive: Anthropic taps former nuke chiefs, lawmakers for advisory council
Why it matters: The defense and intelligence communities are shelling out billions of dollars in AI and autonomy contracts. Competition for such deals is fierce.

Zoom in: The bipartisan National Security and Public Sector Advisory Council comprises 11 inaugural members, including:
* Lisa Gordon-Hagerty and Jill Hruby, former National Nuclear Security Administration bosses
* Dave Luber, former NSA cybersecurity director and Cyber Command executive director
* Patrick Shanahan, former deputy defense secretary
* David Cohen, former CIA deputy director
* Former Sens. Jon Tester (D-Mont.), who handled defense appropriations, and Roy Blunt (R-Mo.), who was a member of the intelligence committee

What they're saying: The council will "help ensure the U.S. and allied democracies maintain decisive advantages in AI development at a critical moment, when the next two to three years will determine whether democratic or authoritarian models of AI governance shape the global order for decades," a person familiar with the operation told Axios.
* Relationships with Five Eyes and Indo-Pacific countries are of particular import.

Zoom out: Anthropic, Google, OpenAI and xAI each won Defense Department contracts worth as much as $200 million this summer.
* "The adoption of AI is transforming the department's ability to support our warfighters and maintain strategic advantage over our adversaries," Chief Digital and AI Officer Doug Matty said at the time.
[4]
Anthropic Forms National Security Advisory Council to Guide AI Use in Government
(Reuters) -Artificial intelligence company Anthropic launched a National Security and Public Sector Advisory Council on Wednesday, aiming to deepen ties with Washington and allied governments as AI becomes increasingly central to defense and strategic planning. The move underscores AI firms' growing efforts to shape policies and ensure their technology supports democratic interests amid global competition. The launch also comes as rivals such as OpenAI and Google DeepMind step up engagement with governments and regulators on AI safety, though neither has announced a dedicated national security advisory council. Anthropic's council brings together former senators and senior officials from the U.S. Defense Department, intelligence agencies as well as energy and justice departments. It will advise Anthropic on integrating AI into sensitive government operations while shaping standards for security, ethics and compliance. Its members include Roy Blunt, a former senator and intelligence committee member, David S. Cohen, a former deputy CIA director, and Richard Fontaine, who leads the Center for a New American Security. Other appointees held top legal and nuclear security roles across Republican and Democratic administrations. Anthropic said the group will advise on high-impact applications in cybersecurity, intelligence analysis and scientific research, while helping set industry standards for responsible AI use. The company plans to expand the council as partnerships with public-sector institutions grow. The announcement follows Anthropic's $200 million agreement with the Pentagon last month to develop AI tools for defense, highlighting the sector's push to balance innovation with security risks. The initiative reflects intensifying global competition over AI capabilities, with Washington seeking to maintain an edge against rivals such as China and Russia. (Reporting by Akash Sriram in Bengaluru; Editing by Shreya Biswas)
[5]
Anthropic forms national security advisory council to guide AI use in government - The Economic Times
Anthropic's council brings together former senators and senior officials from the US Defense Department, intelligence agencies as well as energy and justice departments.

Artificial intelligence company Anthropic launched a National Security and Public Sector Advisory Council on Wednesday, aiming to deepen ties with Washington and allied governments as AI becomes increasingly central to defense and strategic planning. The move underscores AI firms' growing efforts to shape policies and ensure their technology supports democratic interests amid global competition. The launch also comes as rivals such as OpenAI and Google DeepMind step up engagement with governments and regulators on AI safety, though neither has announced a dedicated national security advisory council. Anthropic's council brings together former senators and senior officials from the US Defense Department, intelligence agencies as well as energy and justice departments. It will advise Anthropic on integrating AI into sensitive government operations while shaping standards for security, ethics and compliance. Its members include Roy Blunt, a former senator and intelligence committee member, David S. Cohen, a former deputy CIA director, and Richard Fontaine, who leads the Center for a New American Security. Other appointees held top legal and nuclear security roles across Republican and Democratic administrations. Anthropic said the group will advise on high-impact applications in cybersecurity, intelligence analysis and scientific research, while helping set industry standards for responsible AI use. The company plans to expand the council as partnerships with public-sector institutions grow. The announcement follows Anthropic's $200 million agreement with the Pentagon last month to develop AI tools for defense, highlighting the sector's push to balance innovation with security risks.
The initiative reflects intensifying global competition over AI capabilities, with Washington seeking to maintain an edge against rivals such as China and Russia.
[6]
Anthropic forms national security advisory council to guide AI use in government
(Reuters) -Artificial intelligence company Anthropic launched a National Security and Public Sector Advisory Council on Wednesday, aiming to deepen ties with Washington and allied governments as AI becomes increasingly central to defense and strategic planning. The move underscores AI firms' growing efforts to shape policies and ensure their technology supports democratic interests amid global competition. The launch also comes as rivals such as OpenAI and Google DeepMind step up engagement with governments and regulators on AI safety, though neither has announced a dedicated national security advisory council. Anthropic's council brings together former senators and senior officials from the U.S. Defense Department, intelligence agencies as well as energy and justice departments. It will advise Anthropic on integrating AI into sensitive government operations while shaping standards for security, ethics and compliance. Its members include Roy Blunt, a former senator and intelligence committee member, David S. Cohen, a former deputy CIA director, and Richard Fontaine, who leads the Center for a New American Security. Other appointees held top legal and nuclear security roles across Republican and Democratic administrations. Anthropic said the group will advise on high-impact applications in cybersecurity, intelligence analysis and scientific research, while helping set industry standards for responsible AI use. The company plans to expand the council as partnerships with public-sector institutions grow. The announcement follows Anthropic's $200 million agreement with the Pentagon last month to develop AI tools for defense, highlighting the sector's push to balance innovation with security risks. The initiative reflects intensifying global competition over AI capabilities, with Washington seeking to maintain an edge against rivals such as China and Russia. (Reporting by Akash Sriram in Bengaluru; Editing by Shreya Biswas)
Anthropic forms a National Security and Public Sector Advisory Council to deepen ties with the U.S. government and shape AI policies in defense and strategic planning.
Anthropic, the company behind the AI model Claude, has taken a significant step in shaping the future of artificial intelligence in government and national security. On August 27, 2025, the company announced the formation of its "National Security and Public Sector Advisory Council," a group of 11 distinguished individuals with extensive experience in U.S. defense, intelligence, and government sectors [1][2].
Source: Reuters
The council brings together an impressive roster of former high-ranking officials, including:
* Roy Blunt, former senator and intelligence committee member
* Jon Tester, former senator who handled defense appropriations
* David S. Cohen, former deputy CIA director
* Lisa Gordon-Hagerty and Jill Hruby, former heads of the National Nuclear Security Administration
* Dave Luber, former NSA cybersecurity director and Cyber Command executive director
* Patrick Shanahan, former deputy defense secretary
* Richard Fontaine, who leads the Center for a New American Security
This bipartisan group aims to advise Anthropic on integrating AI into sensitive government operations while establishing standards for security, ethics, and compliance. The council will focus on high-impact applications in cybersecurity, intelligence analysis, and scientific research [2][4].
The formation of this council appears to be a calculated move by Anthropic to secure its position in the lucrative and compute-intensive U.S. national security sector. The company has already made significant inroads with the launch of Claude Gov, a version of its AI tailored for government use, and a $200 million prototype contract with the Pentagon's Chief Digital and Artificial Intelligence Office [1][5].
Anthropic's strategy heavily relies on advanced computing infrastructure. The company's next-generation Claude models will run on "Rainier," a massive AWS supercluster powered by hundreds of thousands of Trainium 2 chips. This is backed by an $8 billion investment from Amazon. Additionally, Anthropic is diversifying its resources by partnering with Google Cloud for TPU accelerators [1].
While rivals like OpenAI and Google DeepMind are also engaging with governments on AI safety, Anthropic's dedicated national security council sets it apart. This move reflects the intensifying global competition over AI capabilities, with the U.S. government seeking to maintain an edge against competitors such as China and Russia [2][4].
Source: Axios
Anthropic's strategy carries both risks and potential rewards. Aligning closely with the Pentagon could alienate some users and bring political complications. However, the benefits include steady contracts, priority access to advanced computing chips, and a direct role in shaping public sector AI standards [1][5].
This initiative underscores the growing importance of AI in national security and government operations. It also highlights the efforts of AI companies to influence policies and ensure their technologies support democratic interests. As partnerships with public-sector institutions grow, Anthropic plans to expand the council, further cementing its role in shaping the future of AI governance [2][4].