5 Sources
[1]
Anthropic is launching a new think tank amid Pentagon blacklist fight
Amid a weekslong conflict with the Pentagon, resulting in a blacklist and a lawsuit, Anthropic is shaking up its C-suite and research initiatives. The company announced Wednesday that it's launching a new internal think tank, called the Anthropic Institute, that combines three of Anthropic's current research teams. It will focus on researching AI's large-scale implications, such as "what happens to jobs and economies, whether AI makes us safer or introduces new dangers, how its values might shape ours, and whether we can retain control," per the company.
[2]
Anthropic is opening an office in DC while battling Pentagon in court
Anthropic has launched a new research initiative called the Anthropic Institute and has revealed that its Public Policy team is opening its first office in Washington, DC this spring. The company made the announcement just a couple of days after it sued the US government to challenge the supply chain risk designation it received from the Defense Department.

As Axios notes, Anthropic is tripling its Public Policy team at a time when AI companies are establishing a presence in Washington so that they can influence future policies around artificial intelligence. In Anthropic's case, it might first have to find a way to be re-accepted by the US government after President Trump ordered federal agencies to stop using its technology.

Sarah Heck, who joined the company as Head of External Affairs, will take over from co-founder Jack Clark as Head of Policy. Clark, meanwhile, has taken on the role of Head of Public Benefit and will lead the Anthropic Institute. The company explains that the institute's role is to "tell the world" what it learns about the challenges that arise as AI firms develop more advanced AI systems. Examples include how powerful AI technologies will reshape jobs and economies and what kinds of threats they'll magnify or introduce.

The institute will bring together and expand Anthropic's current research teams: the Frontier Red Team, which stress-tests AI systems; the Societal Impacts team, which looks at how AI is used in the real world; and the Economic Research team, which tracks AI's impact on jobs and the larger economy. Anthropic has hired Matt Botvinick, a former Senior Director of Research at Google DeepMind, and Zoë Hitzig, who studied AI's social and economic impacts at OpenAI, to be founding members of the institute.
[3]
Anthropic ramps up its D.C. presence
Why it matters: AI companies are spending heavily and expanding to Washington to influence the policies that will define the technology's future.

Driving the news: Anthropic is tripling its policy team and opening a permanent office in D.C. this spring to engage policymakers and think tanks long term.
* Expect Anthropic to continue advocating for export controls on advanced chips, a "clear" federal AI regulation framework, energy ratepayer protections and model transparency, a spokesperson said.

"AI is advancing faster than any technology in history, and the window to get policy right is closing," said Sarah Heck, Anthropic's new head of public policy.
* "We're growing a bipartisan team in Washington because we believe smart policy can accelerate American innovation and economic growth -- not slow it down."
* Heck is stepping into the role that co-founder Jack Clark previously held and will report to president and co-founder Daniela Amodei.

In his new role as head of public benefit, Clark will lead The Anthropic Institute, a new research venture where he will oversee three teams:
* The Frontier Red Team will stress-test AI systems to understand their limits and capabilities.
* The Societal Impacts team will study how AI is being used in the real world.
* The Economic Research team will track how AI is impacting jobs and the larger economy.

The intrigue: The Anthropic Institute has hired Zoë Hitzig, formerly of OpenAI, and Anton Korinek, a University of Virginia professor on leave.
* Matt Botvinick, formerly of Google DeepMind, will lead work on AI and the rule of law. Another research team to better understand how AI will interact with the legal system is in the works.
* The federal affairs team includes registered lobbyists from both parties.

The big picture: Under intense scrutiny from the Trump administration, Anthropic is building a permanent Washington operation and signaling that the debate over the industry's future is just beginning.
[4]
Anthropic launches an institute to tackle AI risks - SiliconANGLE
Anthropic PBC today announced the formation of the Anthropic Institute, a business unit tasked with studying the risks posed by artificial intelligence. The unit will bring three of the company's existing teams together under the leadership of co-founder Jack Clark. As part of the move, Anthropic has appointed the executive as its head of public benefit.

The first team that forms part of the Anthropic Institute is known as the Frontier Red Team. It's responsible for studying AI-related cybersecurity risks. In one recent project, the unit used Claude to scan Firefox's code base for vulnerabilities. It later tested whether the AI can autonomously develop ways of exploiting the bugs that it finds.

The Anthropic Institute also includes the company's Societal Impacts team. The latter unit collects data on how users interact with Claude. Last month, it published a study that evaluated why and when workers allow AI agents to perform tasks in a fully autonomous manner.

The third unit that Anthropic is folding into the Anthropic Institute is called Economic Research. As the name suggests, it studies the economic impact of AI. The unit is responsible for publishing Anthropic's Economic Index report, which contains data on what business activities its customers are automating with Claude.

Besides bringing the three business units under one roof, the company also plans to grow their headcount. As part of the recruiting drive, Anthropic has hired Matt Botvinick, a former senior director of research at Google DeepMind. He is joining the company alongside former OpenAI Group PBC researcher Zoë Hitzig and economics professor Anton Korinek. Anthropic stated that Korinek will lead a project focused on understanding how AI could "reshape the very nature of economic activity." Hitzig, in turn, is "joining to connect our economics work to model training and development."

The Anthropic Institute is also working on a number of other projects. According to the company, the unit is seeking to predict future AI progress and understand how technology may interact with the legal system in the future.

In parallel with the effort to grow the Anthropic Institute, the company plans to boost the headcount of an existing team called Public Policy. The unit is responsible for, among other things, drafting the AI-related policy suggestions that the software maker occasionally shares with lawmakers. Those suggestions focus on topics such as the manner in which AI infrastructure investments are regulated. Anthropic has appointed former Stripe Inc. executive Sarah Heck to lead the Public Policy team. According to the company, the next step will be to open an office for the unit in Washington, D.C. this spring. The company plans to follow up the move by expanding its policy work in international markets.
[5]
Anthropic Institute wants to warn us about how AI could harm human civilization
The company racing to build the most powerful AI just created an institute to study the damage. That's not a contradiction Anthropic seems particularly embarrassed about.

The San Francisco-based AI lab announced the Anthropic Institute, a new research body led by co-founder Jack Clark, tasked with confronting what the company calls "the most significant challenges that powerful AI will pose to our societies." Jobs, economies, national security, governance, the rule of law: the Institute wants to study all of it. The timing is pointed. Anthropic believes transformative AI isn't decades away; it thinks it's arriving in the next two years.

The Institute isn't starting from scratch. It pulls together three existing Anthropic teams: the Frontier Red Team, which stress-tests AI for dangerous capabilities; Societal Impacts, which tracks real-world AI use; and Economic Research, which studies what AI is doing to jobs and labour markets. New efforts on AI forecasting and AI's interactions with the legal system are also in the works.

Clark, who will now serve as Anthropic's Head of Public Benefit, is bringing in serious outside talent: a Princeton professor of neural computation is joining to lead work on AI and the rule of law, and a University of Virginia economics professor will study how transformative AI could reshape economic activity itself.

What makes the Institute unusual is the vantage point it claims. Anthropic argues, not unreasonably, that the people building frontier AI have access to information about its risks that nobody else does. The Institute intends to use that access to report "candidly" - their word - about what they're learning. The pitch is that transparency from the inside is more valuable than analysis from the outside.
Whether you buy that depends on how much faith you have in a company policing its own existential concerns. Anthropic's entire brand is built on being the responsible actor in a field full of cowboys, which is either genuinely reassuring or the most sophisticated marketing in Silicon Valley, depending on where you're standing. Creating a public-facing institute to broadcast what you're learning about AI's societal risks is consistent with that positioning, and also happens to be excellent for the brand.

None of that makes the work less necessary. The questions the Institute wants to tackle - who governs recursive self-improvement, who gets told when it begins, how societies absorb displacement at AI speed - are real and urgent. Someone should be asking them seriously. It might as well be the people who started the clock.
Anthropic unveiled the Anthropic Institute, a new research body combining three existing teams to study how AI will reshape jobs, economies, and governance. The announcement comes as the company fights a Pentagon blacklist, triples its policy team, and opens its first Washington D.C. office this spring under new leadership.
Anthropic has launched the Anthropic Institute, a new internal think tank designed to examine the risks associated with artificial intelligence, as the company simultaneously battles a Pentagon blacklist and expands its Washington presence [1]. The institute will focus on AI's large-scale implications, including what happens to jobs and economies, whether AI systems make society safer or introduce new dangers, and whether humans can retain control over AI systems [1].
The timing is particularly notable given Anthropic's ongoing legal fight with the Defense Department over a supply chain risk designation that resulted in President Trump ordering federal agencies to stop using the company's technology [2]. Despite these challenges, the company is making aggressive moves to influence future AI policy through expanded operations in the nation's capital. Co-founder Jack Clark has transitioned from his role as Head of Policy to become Head of Public Benefit, where he will lead the Anthropic Institute [2]. Sarah Heck, a former Stripe executive who recently joined as Head of External Affairs, has taken over Clark's previous position as Head of Public Policy [4].

Anthropic is tripling its Public Policy team and opening a permanent office in Washington, D.C. this spring to engage policymakers and think tanks long term [3]. "AI is advancing faster than any technology in history, and the window to get policy right is closing," Heck stated, emphasizing that the company is building a bipartisan team to ensure smart policy can accelerate American innovation rather than slow it down [3].

The Anthropic Institute consolidates three existing research teams to study the societal impacts of AI comprehensively. The Frontier Red Team, responsible for stress-testing AI systems to understand their limits and capabilities, recently used Claude to scan Firefox's code base for vulnerabilities and tested whether AI can autonomously develop exploitation methods [4][3].
The Societal Impacts team collects data on how users interact with Claude in real-world scenarios. Last month, it published research evaluating why and when workers allow AI agents to perform tasks autonomously [4]. The Economic Research team tracks the economic effects of AI on jobs and broader markets, publishing Anthropic's Economic Index report, which details what business activities customers are automating with Claude [4].

Anthropic has recruited significant talent to strengthen the institute's capabilities. Matt Botvinick, a former senior director of research at Google DeepMind, will lead work on AI and the rule of law, with plans to build another research team examining how AI will interact with legal systems [3][4]. Zoë Hitzig, who previously studied AI's social and economic impacts at OpenAI, is joining to connect economics work to model training and development [2][4]. Anton Korinek, a University of Virginia economics professor on leave, will lead a project focused on understanding how AI could "reshape the very nature of economic activity" [3][4]. The institute is also working on projects to predict future AI progress and understand technology's interaction with governance systems.

Anthropic plans to continue advocating for export controls on advanced chips, a clear federal AI regulation framework, energy ratepayer protections, and model transparency [3]. The federal affairs team includes registered lobbyists from both parties, reflecting the company's bipartisan approach to policy engagement [3]. Following the Washington, D.C. office opening, Anthropic intends to expand its policy work in international markets [4].
The company argues that those building frontier AI have access to information about its risks that nobody else possesses, positioning the institute as uniquely capable of reporting candidly about what it learns [5]. Anthropic believes transformative AI isn't decades away but arriving within the next two years, making the institute's work particularly urgent [5]. Whether this transparency from inside the industry proves more valuable than external analysis remains to be seen, but the questions the institute tackles (who governs AI advancement, who gets informed when critical thresholds are crossed, how societies absorb AI-driven displacement) demand serious attention as AI safety concerns intensify across the sector.