3 Sources
[1]
OpenAI Staffer Quits, Alleging Company's Economic Research Is Drifting Into AI Advocacy
OpenAI has allegedly become more guarded about publishing research that highlights the potentially negative impact that AI could have on the economy, four people familiar with the matter tell WIRED. The perceived pullback has contributed to the departure of at least two employees on OpenAI's economic research team in recent months, according to the same four people, who spoke to WIRED on the condition of anonymity. One of these employees, Tom Cunningham, left the company entirely in September after concluding it had become difficult to publish high-quality research, WIRED has learned.

In a parting message shared internally, Cunningham wrote that the team faced a growing tension between conducting rigorous analysis and functioning as a de facto advocacy arm for OpenAI, according to sources familiar with the situation. Cunningham declined WIRED's request for comment.

OpenAI chief strategy officer Jason Kwon addressed these concerns in an internal memo following Cunningham's departure. In a copy of the message obtained by WIRED, Kwon argued that OpenAI must act as a responsible leader in the AI sector and should not only raise problems with the technology, but also "build the solutions." "My POV on hard subjects is not that we shouldn't talk about them," Kwon said on Slack. "Rather, because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes."

In a statement to WIRED, OpenAI spokesperson Rob Friedlander said the company hired its first chief economist, Aaron Chatterji, last year and has since expanded the scope of its economic research. "The economic research team conducts rigorous analysis that helps OpenAI, policymakers, and the public understand how people are using AI and how it is shaping the broader economy, including where benefits are emerging and where societal impacts or disruptions may arise as the technology evolves," Friedlander said.

The alleged shift comes as OpenAI deepens its multibillion-dollar partnerships with corporations and governments, cementing itself as a central player in the global economy. Experts believe the technology OpenAI is developing could transform how people work, although there are still large questions about when this change will happen and to what extent it will affect people and global markets.

Since 2016, OpenAI has regularly released research on how its own systems could reshape labor and shared data with outside economists. In 2023 it copublished "GPTs Are GPTs," a widely cited paper investigating which sectors were likely to be most vulnerable to automation. Over the past year, however, two sources say the company has become more reluctant to release work that highlights the economic downsides of AI, such as job displacement, and has favored publishing positive findings.
[2]
Former OpenAI employees say they left because the company was 'too restrictive' about AI research.
According to a report from Wired, sources at OpenAI say the company "has become more reluctant to release work that highlights the economic downsides of AI." At least two employees have reportedly left as a result of research restrictions, including former researcher Tom Cunningham: "In a parting message shared internally, Cunningham wrote that the team faced a growing tension between conducting rigorous analysis and functioning as a de facto advocacy arm for OpenAI, according to sources familiar with the situation."
[3]
OpenAI Accused of Self-Censoring Research That Paints AI In a Bad Light
OpenAI is allegedly self-censoring its research on the negative impact of AI, and it has even led to the departure of at least two employees. According to a new report from WIRED, OpenAI has become "more guarded" about publishing the negative findings of its economic research team, like data on all the jobs that AI might replace. Employees are allegedly quitting over this, including data scientist Tom Cunningham, who now works as a researcher at METR, a nonprofit that develops evaluations to test AI models against public safety threats. According to the report, Cunningham wrote in an internal message at the time of his recent departure that the economic research team was essentially functioning as OpenAI's advocacy arm.

OpenAI began as a research lab, but it has since gone through quite an evolution as the company shifted focus toward its commercial products, which generate billions of dollars in revenue. The company's economic research operations are reportedly being managed by OpenAI's first chief economist, Aaron Chatterji, who was hired late last year. Under Chatterji, the team recently shared findings that AI use could save the average worker 40 to 60 minutes a day. According to the WIRED report, Chatterji reports to OpenAI's chief global affairs officer, Chris Lehane, who earned the reputation of "master of disaster" through his work for former President Bill Clinton (and years later for Airbnb and Coinbase) and is widely considered an expert on damage control.

This isn't the first time OpenAI has been accused of favoring product over safety research. Just last month, a New York Times report accused OpenAI of being well aware of the inherent mental health risks of addictive AI chatbot design and still choosing to pursue it. It's also not the first time a former employee has deemed OpenAI's research review too restrictive. Last year, the company's former head of policy research, Miles Brundage, shared that he was leaving because the publishing constraints had "become too much." "OpenAI is now so high-profile, and its outputs reviewed from so many different angles, that it's hard for me to publish on all the topics that are important to me," Brundage shared in a Substack post.

Not only is artificial intelligence changing every aspect of modern-day society, it is also already proving to have a colossal impact on the economy. AI spending is probably propping up the entire American economy right now, according to some reports. And while the jury's still out on just how effectively and to what extent AI can take over jobs, early research says that AI is already crushing the early-career job market. Even Fed chair Jerome Powell has admitted that AI is "probably a factor" in current unemployment rates.

At the core of this outsized impact of AI is OpenAI. The company is at the heart of a tangled web of multibillion-dollar dealmaking, and ChatGPT is such a central product that it has become almost synonymous with the phrase "AI chatbot." OpenAI is also the centerpiece of Stargate, the Trump administration's mysterious but massive AI data center buildout plan. Trump and his officials have stood squarely behind the positive potential of AI while dismissing concerns raised by competitors like Anthropic as fear-mongering or doomerism. OpenAI executives have also been caught up in an industry-wide divide over AI safety playing out on Capitol Hill. OpenAI President Greg Brockman is one of the top backers of "Leading the Future," a super-PAC that views most AI safety regulation as an obstacle to innovation.
OpenAI is facing internal turmoil as employees leave over concerns the company has become reluctant to publish research highlighting AI's negative economic impacts. Former researcher Tom Cunningham departed in September, citing growing tension between rigorous analysis and functioning as OpenAI's advocacy arm. The controversy raises questions about research integrity as the AI leader deepens multibillion-dollar partnerships.
OpenAI has allegedly grown more guarded about publishing AI research that reveals potentially negative economic impacts, according to four people familiar with the matter who spoke to WIRED [1]. This perceived shift toward self-censoring research has contributed to employee departures from the company's economic research team in recent months, marking a significant moment for the AI leader as it balances commercial interests with scientific rigor.

Tom Cunningham, a data scientist on OpenAI's economic research team, left the company entirely in September after concluding it had become difficult to publish high-quality research [1]. In a parting message shared internally, Cunningham wrote that the team faced growing tension between conducting rigorous analysis and functioning as a de facto advocacy arm for OpenAI, according to sources familiar with the situation [2]. Cunningham now works as a researcher at METR, a nonprofit that develops evaluations to test AI models against public safety threats [3].
Following Cunningham's departure, OpenAI chief strategy officer Jason Kwon addressed these concerns in an internal memo obtained by WIRED [1]. Kwon argued that OpenAI must act as a responsible leader in the AI sector and should not only raise problems with the technology but also "build the solutions." He emphasized on Slack that because OpenAI is "not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes" [1].

OpenAI spokesperson Rob Friedlander defended the company's approach, stating that OpenAI hired its first chief economist, Aaron Chatterji, last year and has since expanded the scope of its economic research [1]. Under Chatterji, who reports to OpenAI's chief global affairs officer Chris Lehane, the team recently shared findings that AI use could save the average worker 40 to 60 minutes a day [3]. Lehane earned a reputation as a "master of disaster" through his work for former President Bill Clinton and later for Airbnb and Coinbase, and is widely regarded as an expert on damage control [3].

This isn't the first time OpenAI has faced accusations of favoring commercial products over safety research. Last year, the company's former head of policy research, Miles Brundage, said he was leaving because the publishing constraints had "become too much" [3]. "OpenAI is now so high-profile, and its outputs reviewed from so many different angles, that it's hard for me to publish on all the topics that are important to me," Brundage wrote in a Substack post [3].

Since 2016, OpenAI has regularly released research on how its systems could reshape labor markets and shared data with outside economists [1]. In 2023, the company copublished "GPTs Are GPTs," a widely cited paper investigating which sectors were most vulnerable to automation [1]. Over the past year, however, two sources say the company has become more reluctant to release work that highlights the economic downsides of AI, such as job displacement, and has favored publishing positive findings [1].
The alleged shift comes as OpenAI deepens its multibillion-dollar partnerships with corporations and governments, cementing itself as a central player in the global economy [1]. OpenAI is at the heart of Stargate, the Trump administration's massive AI data center buildout plan, and ChatGPT has become almost synonymous with the term "AI chatbot" [3]. The company's evolution from research lab to commercial powerhouse has raised questions about research integrity as AI's impact on labor markets becomes increasingly apparent. Even Federal Reserve chair Jerome Powell has admitted that AI is "probably a factor" in current unemployment rates [3]. OpenAI executives have also been caught up in an industry-wide divide over AI safety playing out on Capitol Hill, with President Greg Brockman backing a super-PAC that views most AI safety regulation as an obstacle to innovation [3]. As policymakers and the public seek to understand the negative impacts of AI, the balance between AI advocacy and transparent research remains critical for shaping how society prepares for technological disruption.
Summarized by Navi