4 Sources
[1]
OpenAI Staffer Quits, Alleging Company's Economic Research Is Drifting Into AI Advocacy
OpenAI has allegedly become more guarded about publishing research that highlights the potentially negative impact that AI could have on the economy, four people familiar with the matter tell WIRED. The perceived pullback has contributed to the departure of at least two employees on OpenAI's economic research team in recent months, according to the same four people, who spoke to WIRED on the condition of anonymity. One of these employees, Tom Cunningham, left the company entirely in September after concluding it had become difficult to publish high-quality research, WIRED has learned. In a parting message shared internally, Cunningham wrote that the team faced a growing tension between conducting rigorous analysis and functioning as a de facto advocacy arm for OpenAI, according to sources familiar with the situation. Cunningham declined WIRED's request for comment.

OpenAI chief strategy officer Jason Kwon addressed these concerns in an internal memo following Cunningham's departure. In a copy of the message obtained by WIRED, Kwon argued that OpenAI must act as a responsible leader in the AI sector and should not only raise problems with the technology, but also "build the solutions." "My POV on hard subjects is not that we shouldn't talk about them," Kwon said on Slack. "Rather, because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes."

In a statement to WIRED, OpenAI spokesperson Rob Friedlander said the company hired its first chief economist, Aaron Chatterji, last year and has since expanded the scope of its economic research. "The economic research team conducts rigorous analysis that helps OpenAI, policymakers, and the public understand how people are using AI and how it is shaping the broader economy, including where benefits are emerging and where societal impacts or disruptions may arise as the technology evolves," Friedlander said.

The alleged shift comes as OpenAI deepens its multibillion-dollar partnerships with corporations and governments, cementing itself as a central player in the global economy. Experts believe the technology OpenAI is developing could transform how people work, although there are still large questions about when this change will happen and to what extent it will impact people and global markets. Since 2016, OpenAI has regularly released research on how its own systems could reshape labor and shared data with outside economists. In 2023, it copublished "GPTs Are GPTs," a widely cited paper investigating which sectors were likely to be most vulnerable to automation. Over the past year, however, two sources say the company has become more reluctant to release work that highlights the economic downsides of AI -- such as job displacement -- and has favored publishing positive findings.
[2]
Former OpenAI employees say they left because the company was 'too restrictive' about AI research.
According to a report from Wired, sources at OpenAI say the company "has become more reluctant to release work that highlights the economic downsides of AI." At least two employees have reportedly left as a result of research restrictions, including former researcher Tom Cunningham: "In a parting message shared internally, Cunningham wrote that the team faced a growing tension between conducting rigorous analysis and functioning as a de facto advocacy arm for OpenAI, according to sources familiar with the situation."
[3]
OpenAI Accused of Self-Censoring Research That Paints AI In a Bad Light
OpenAI is allegedly self-censoring its research on the negative impact of AI, and it's even led to the departure of at least two employees. According to a new report from WIRED, OpenAI has become "more guarded" about publishing the negative findings of its economic research team, like data on all the jobs that AI might replace. Employees are allegedly quitting over this, including data scientist Tom Cunningham, who now works as a researcher at METR, a nonprofit that develops evaluations to test AI models against public safety threats. According to the report, Cunningham wrote in an internal message at the time of his recent departure that the economic research team was essentially functioning as OpenAI's advocacy arm.

OpenAI began as a research lab, but has since gone through quite an evolution as the company shifted focus toward its commercial products, which generate billions of dollars in revenue. The company's economic research operations are reportedly being managed by OpenAI's first chief economist, Aaron Chatterji, who was hired late last year. Under Chatterji, the team recently shared its findings that AI use could save the average worker 40 to 60 minutes a day. According to the WIRED report, Chatterji reports to OpenAI's chief global affairs officer, Chris Lehane, who earned a reputation as a "master of disaster" through his work for former President Bill Clinton (and years later for Airbnb and Coinbase), and is widely considered an expert on damage control.

This isn't the first time OpenAI has been accused of favoring product over safety research. Just last month, a New York Times report accused OpenAI of being well aware of the inherent mental health risks of addictive AI chatbot design and still choosing to pursue it. It's also not the first time a former employee has deemed OpenAI's research review to be too harsh. Last year, the company's former head of policy research, Miles Brundage, shared that he was leaving because the publishing constraints had "become too much." "OpenAI is now so high-profile, and its outputs reviewed from so many different angles, that it's hard for me to publish on all the topics that are important to me," Brundage shared in a Substack post.

Not only is artificial intelligence changing every aspect of modern-day society, but it is also already proving to have a colossal impact on the economy. AI spending is probably propping up the entire American economy right now, according to some reports. And while the jury's still out on just how effectively and to what extent AI can take over jobs, early research says that AI is already crushing the early-career job market. Even Fed chair Jerome Powell has admitted that AI is "probably a factor" in current unemployment rates.

At the core of this outsized impact of AI is OpenAI. The company is at the heart of a tangled web of multibillion-dollar dealmaking, and ChatGPT is such a central product that it has become almost synonymous with the phrase "AI chatbot." OpenAI is also the centerpiece of Stargate, the Trump administration's mysterious but massive AI data center buildout plan. Trump and his officials have stood squarely behind the positive potential of AI, while dismissing concerns echoed by competitors like Anthropic as fear-mongering or doomerism. OpenAI executives have also been caught up in an industry-wide divide over AI safety playing out on Capitol Hill. OpenAI President Greg Brockman is one of the top backers of "Leading the Future," a super-PAC that views most AI safety regulation as an obstacle to innovation.
[4]
OpenAI Researcher Quits, Saying Company Is Hiding the Truth
OpenAI has long published research on the potential safety and economic impact of its own technology. Now, Wired reports that the Sam Altman-led company is becoming more "guarded" about publishing research that points to an inconvenient truth: that AI could be bad for the economy. The perceived censorship has become such a point of frustration that at least two OpenAI employees working on its economic research team have quit the company, according to four Wired sources. One of these employees was economics researcher Tom Cunningham. In a parting message shared internally, he wrote that the economic research team was veering away from doing real research and instead acting like its employer's propaganda arm.

Shortly after Cunningham's departure, OpenAI's chief strategy officer Jason Kwon sent a memo saying the company should "build solutions," not just publish research on "hard subjects." "My POV on hard subjects is not that we shouldn't talk about them," Kwon wrote on Slack. "Rather, because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes."

The reported censorship, or at least hostility toward pursuing work that paints AI in an unflattering light, is emblematic of OpenAI's shift away from its non-profit and ostensibly altruist roots as it transforms into a global economic juggernaut. When OpenAI was founded in 2015, it championed open-source AI and research. Today its models are closed-source, and the company has restructured itself into a for-profit public benefit corporation. Reports also suggest that the private entity is planning to go public at a $1 trillion valuation, anticipated to be one of the largest initial public offerings of all time, though exactly when is unclear.

Though its non-profit arm remains nominally in control, OpenAI has garnered billions of dollars in investment and signed deals that could bring in hundreds of billions more, while also entering contracts to spend similarly dizzying sums. OpenAI got AI chipmaker Nvidia to agree to invest up to $100 billion in it on one end, and says it will pay Microsoft up to $250 billion for its Azure cloud services on the other. With that sort of money hanging in the balance, it has billions of reasons not to release findings that shake the public's already wavering belief in its tech -- as many fear its potential to destroy or replace jobs, not to mention talk of an AI bubble or existential risks to humankind from the technology.

OpenAI's economic research is currently overseen by Aaron Chatterji. According to Wired, Chatterji led a report released in September that showed how people around the world use ChatGPT, framing it as proof that the product creates economic value by increasing productivity. If that seems suspiciously glowing, an economist who previously worked with OpenAI, and who chose to remain anonymous, alleged to Wired that the company is increasingly publishing work that glorifies its own tech.

Cunningham isn't the only employee to leave the company over ethical concerns about its direction. William Saunders, a former member of OpenAI's now-defunct "Superalignment" team, said he quit after realizing it was "prioritizing getting out newer, shinier products" over user safety. After departing last year, former safety researcher Steven Adler has repeatedly criticized OpenAI for its risky approach to AI development, highlighting how ChatGPT appeared to be driving some of its users into mental crises and delusional spirals. Wired noted that OpenAI's former head of policy research, Miles Brundage, complained after leaving last year that it had become "hard" to publish research "on all the topics that are important to me."
OpenAI faces mounting criticism for allegedly restricting publication of research highlighting AI's negative economic impacts. Tom Cunningham and at least one other researcher left the economic research team after internal tensions over the company's shift from rigorous analysis to what departing employees called a propaganda arm for AI advocacy.
OpenAI has become increasingly reluctant to publish AI research that highlights potential negative economic impacts, according to four sources familiar with the matter who spoke to WIRED [1]. The alleged shift toward self-censoring research has triggered employee departures from the company's economic research team, raising questions about the balance between corporate interests and scientific transparency at one of the world's most influential AI companies.

The perceived pullback on publishing unfavorable findings represents a significant departure from OpenAI's earlier approach. Since 2016, the company has regularly released research on how its systems could reshape labor markets and shared data with outside economists [1]. In 2023, OpenAI copublished "GPTs Are GPTs," a widely cited paper investigating which sectors faced the greatest vulnerability to automation [1]. Over the past year, however, two sources say the company has favored publishing positive findings while becoming more guarded about work addressing issues like job displacement [1].

Tom Cunningham, a researcher on OpenAI's economic research team, left the company entirely in September after concluding it had become difficult to publish high-quality research [1]. In a parting message shared internally, Cunningham wrote that the team faced growing tension between conducting rigorous analysis and functioning as a de facto advocacy arm for OpenAI [1][2]. His departure highlighted what some employees view as the company's transformation from a research institution into what critics describe as a propaganda arm prioritizing corporate messaging over scientific integrity [4].

Cunningham now works as a researcher at METR, a nonprofit that develops evaluations to test AI models against public safety threats [3]. At least two employees have reportedly left the economic research team due to restrictive research policies, according to sources who spoke on condition of anonymity [1][3].

Following Cunningham's departure, OpenAI chief strategy officer Jason Kwon addressed the concerns in an internal memo obtained by WIRED [1]. Kwon argued that OpenAI must act as a responsible leader in the AI sector and should not only raise problems with the technology but also "build the solutions" [1]. "My POV on hard subjects is not that we shouldn't talk about them," Kwon wrote on Slack. "Rather, because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes" [1][4].

OpenAI spokesperson Rob Friedlander defended the company's approach, stating that the economic research team "conducts rigorous analysis that helps OpenAI, policymakers, and the public understand how people are using AI and how it is shaping the broader economy, including where benefits are emerging and where societal impacts or disruptions may arise as the technology evolves" [1].

Cunningham's exit follows a pattern of employee departures linked to concerns about prioritizing product over safety. Last year, OpenAI's former head of policy research, Miles Brundage, left the company, sharing that publishing constraints had "become too much" [3][4]. "OpenAI is now so high-profile, and its outputs reviewed from so many different angles, that it's hard for me to publish on all the topics that are important to me," Brundage wrote in a Substack post [3]. William Saunders, a former member of OpenAI's now-defunct "Superalignment" team, said he quit after realizing the company was "prioritizing getting out newer, shinier products" over user safety [4]. Former safety researcher Steven Adler has repeatedly criticized OpenAI for its risky approach to AI development [4].

The alleged shift comes as OpenAI deepens multibillion-dollar partnerships with corporations and governments, cementing itself as a central player in the global economy [1]. OpenAI began as a research lab but has evolved significantly as the company shifted focus toward commercial products that generate billions of dollars in revenue [3]. The company has restructured itself into a for-profit entity, with reports suggesting plans to go public at a $1 trillion valuation in what could be one of the largest initial public offerings of all time [4].

OpenAI's economic research operations are managed by its first chief economist, Aaron Chatterji, hired late last year [1][3]. Under Chatterji, the team recently shared findings that AI use could save the average worker 40 to 60 minutes a day [3]. An economist who previously worked with OpenAI alleged to WIRED that the company is increasingly publishing work that glorifies its own technology [4]. Chatterji reports to OpenAI's chief global affairs officer, Chris Lehane, known for his expertise in damage control and public relations [3].