Nine people at Anthropic are tasked with preventing AI from causing harm as company soars to $183B


Anthropic's societal impacts team of just nine employees works to uncover AI's potential negative societal implications while the company experiences hypergrowth. Led by Deep Ganguli, the team publishes research on everything from economic impact to election-related risks, helping establish Anthropic as the trusted AI company for business even as it navigates conflicts with tech leaders and policymakers.

A Small Team with an Outsized Mission

Deep Ganguli leads a team of just nine people at Anthropic tasked with one of the most sweeping challenges in technology: ensuring artificial intelligence doesn't harm society.[1] The societal impacts team, which has no direct analog at OpenAI, Google, or other major AI competitors, operates within a company of more than 2,000 employees that nearly tripled its valuation to $183 billion in less than a year.[1] Their mandate goes beyond preventing AI from causing harm through obvious risks like scams or bioweapons: they investigate AI's potential negative societal implications across economics, persuasion, discrimination, and election-related risks.[1]

Source: Fortune

Ganguli's journey began in May 2020, when he read OpenAI's GPT-3 paper and realized he couldn't remain on the sidelines. His friend Jack Clark recruited him to Anthropic, the startup founded by former OpenAI employees who were concerned their previous employer wasn't taking AI safety seriously enough.[1] What started as a one-man operation has grown into a tight-knit group that meets five days a week, works out together, and maintains what Ganguli calls a commitment to finding "inconvenient truths" that tech companies have incentives not to publicize.[1]

Building Public Trust Through Transparency

The team's approach centers on radical transparency. "We are going to tell the truth," Ganguli explained, noting that public trust depends on honesty about what the data reveals.[1] This commitment has contributed significantly to Anthropic's reputation as the "safe" AI giant, a positioning that CEO Dario Amodei says has created unexpected "synergies" between safety work and commercial success.[2] "Businesses value trust and reliability," Amodei noted, explaining how the emphasis on AI safety has helped Claude models gain traction with enterprises.[2]

Research scientist Esin Durmus joined Ganguli in February 2023, just before Claude's launch, to examine how chatbots might offer biased opinions that don't equitably represent diverse global perspectives.[1] The team's research papers span everything from AI's economic impact to its persuasiveness, providing both Anthropic leadership and the public with data about the technology's effects.[1]

Source: The Verge

Winning Over Large Businesses in a Competitive AI Landscape

Anthropic's focus on safety has translated into remarkable commercial momentum. The company is on track to hit an annualized revenue run rate of close to $10 billion by year-end, more than 10 times what it generated in 2024.[2] By some metrics, Anthropic has pulled ahead of OpenAI and Google in enterprise usage, with projections suggesting revenues could reach $26 billion in 2026 and $70 billion in 2028.[2] Dario Amodei attributes this success partly to more efficient AI model training and operation, contrasting his approach with rivals' massive infrastructure spending.[2]

The company's hypergrowth presents challenges. Anthropic had fewer than 200 employees in late 2023 but now employs approximately 2,300 people.[2] It's expanding internationally with new offices in Paris, Tokyo, Munich, Seoul, and Bengaluru, while hiring salespeople, customer support engineers, and researchers.[2] Despite this growth trajectory, the company was on pace to consume $2.8 billion more cash than it generated in 2025, though projections show it breaking even in 2028.[2]

Navigating Political and Industry Tensions

Anthropic's commitment to regulation and safety has created friction with influential figures. Key Trump administration officials have expressed skepticism or hostility toward the company's AI safety positions.[2] The company has clashed with Nvidia CEO Jensen Huang over limiting AI chip exports to China, and with Salesforce CEO Marc Benioff over warnings about AI-induced job losses.[2] Additionally, Anthropic agreed in September to a $1.5 billion settlement of a class action lawsuit brought by authors over its use of pirated books to train Claude.[2]

The fundamental question remains whether nine people can effectively guide how ultra-disruptive technology impacts society, especially as executives face pressure to turn profits in the competitive AI landscape.[1] The team's current level of freedom may face tests as mind-boggling profits await whoever moves quickest.[1] For now, the societal impacts team continues meeting in Anthropic's eighth-floor cafeteria, embracing their "cone of uncertainty" while working to ensure AI interacts positively with people across all levels of society.[1]
