OpenAI employee departures reveal tension between rigorous AI research and corporate advocacy

4 Sources

OpenAI faces mounting criticism for allegedly restricting publication of research that highlights AI's negative economic impacts. Tom Cunningham and at least one other researcher have left the economic research team amid internal tensions over what departing employees described as the company's shift from rigorous analysis toward acting as a propaganda arm for AI advocacy.

OpenAI Accused of Restricting Research on Economic Downsides

OpenAI has become increasingly reluctant to publish AI research that highlights potential negative economic impacts, according to four sources familiar with the matter who spoke to WIRED [1]. The alleged shift toward self-censoring research has triggered employee departures from the company's economic research team, raising questions about the balance between corporate interests and scientific transparency at one of the world's most influential AI companies.

Source: Futurism

The perceived pullback on publishing unfavorable findings represents a significant departure from OpenAI's earlier approach. Since 2016, the company has regularly released research on how its systems could reshape labor markets and shared data with outside economists [1]. In 2023, OpenAI copublished "GPTs Are GPTs," a widely cited paper investigating which sectors faced the greatest vulnerability to automation [1]. Over the past year, however, two sources say the company has favored publishing positive findings while becoming more guarded about work addressing issues like job displacement [1].

Internal Tensions Drive Tom Cunningham's Departure

Tom Cunningham, a researcher on OpenAI's economic research team, left the company entirely in September after concluding it had become difficult to publish high-quality research [1]. In a parting message shared internally, Cunningham wrote that the team faced growing tension between conducting rigorous analysis and functioning as a de facto advocacy arm for OpenAI [1][2]. His departure highlighted what some employees and outside critics describe as the company's transformation from a research institution into a propaganda arm that prioritizes corporate messaging over scientific integrity [4].

Source: Wired

Cunningham now works as a researcher at METR, a nonprofit that develops evaluations to test AI models against public safety threats [3]. At least two employees have reportedly left the economic research team due to restrictive research policies, according to sources who spoke on condition of anonymity [1][3].

Jason Kwon Defends OpenAI's Approach to AI Advocacy

Following Cunningham's departure, OpenAI chief strategy officer Jason Kwon addressed the ethical concerns in an internal memo obtained by WIRED [1]. Kwon argued that OpenAI must act as a responsible leader in the AI sector and should not only raise problems with the technology but also "build the solutions" [1]. "My POV on hard subjects is not that we shouldn't talk about them," Kwon wrote on Slack. "Rather, because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes" [1][4].

OpenAI spokesperson Rob Friedlander defended the company's approach, stating that the economic research team "conducts rigorous analysis that helps OpenAI, policymakers, and the public understand how people are using AI and how it is shaping the broader economy, including where benefits are emerging and where societal impacts or disruptions may arise as the technology evolves" [1].

Pattern of Employee Departures Over Safety Research

Cunningham's exit follows a pattern of employee departures linked to concerns about prioritizing product over safety. Last year, OpenAI's former head of policy research, Miles Brundage, left the company, saying that publishing constraints had "become too much" [3][4]. "OpenAI is now so high-profile, and its outputs reviewed from so many different angles, that it's hard for me to publish on all the topics that are important to me," Brundage wrote in a Substack post [3].

William Saunders, a former member of OpenAI's now-defunct "Superalignment" team, said he quit after realizing the company was "prioritizing getting out newer, shinier products" over user safety [4]. Former safety researcher Steven Adler has repeatedly criticized OpenAI for its risky approach to AI development [4].

Commercial Pressures Shape Research Direction

The alleged shift comes as OpenAI deepens multibillion-dollar partnerships with corporations and governments, cementing itself as a central player in the global economy [1]. OpenAI began as a research lab but has evolved significantly as the company shifted focus toward commercial products that generate billions of dollars in revenue [3]. The company has restructured itself into a for-profit entity, with reports suggesting plans to go public at a $1 trillion valuation in what could be one of the largest initial public offerings of all time [4].

OpenAI's economic research operations are managed by its first chief economist, Aaron Chatterji, hired late last year [1][3]. Under Chatterji, the team recently shared findings that AI use could save the average worker 40 to 60 minutes a day [3]. An economist who previously worked with OpenAI alleged to WIRED that the company is increasingly publishing work that glorifies its own technology [4]. Chatterji reports to OpenAI's chief global affairs officer, Chris Lehane, known for his expertise in damage control and public relations [3].
