OpenAI insiders question Sam Altman's trustworthiness as CEO controls future of AI

Reviewed by Nidhi Govil


A sweeping New Yorker investigation exposes deep trust issues surrounding OpenAI CEO Sam Altman. Internal memos from former chief scientist Ilya Sutskever document alleged deceptions and manipulations, with insiders questioning whether Altman can be trusted to control superintelligence. The 16,000-word report reveals a pattern of lying that extends from his previous startups to his current role leading the world's most influential AI company.

OpenAI Board Members Compiled Evidence Against Sam Altman

In fall 2023, Ilya Sutskever, OpenAI's then-chief scientist, sent secret memos to fellow members of the OpenAI board of directors documenting serious concerns about Sam Altman's fitness to lead the company [2]. The roughly 70-page compilation of Slack messages and HR documents alleged that the OpenAI CEO misrepresented facts to executives and board members while deceiving them about internal safety protocols [1]. One memo began with a list headed "Sam exhibits a consistent pattern of..." with the first item being "Lying" [2].

Sutskever, who had once officiated Greg Brockman's wedding at OpenAI's offices in 2019, grew increasingly convinced that Altman should not "have his finger on the button" as the company approached its goal of creating Artificial General Intelligence (AGI) [2]. Board members Helen Toner and Tasha McCauley received the memos as confirmation of what they already believed: despite a role that entrusted him with the future of humanity, Altman could not be trusted [2].

Source: New Yorker

New Yorker Investigation Reveals Pattern Extending Beyond OpenAI

The New Yorker investigation, which drew on more than 100 people familiar with how Altman conducts business and over 12 interviews with Altman himself, paints a troubling picture that begins well before his time at OpenAI [1]. Senior employees at Loopt, Altman's now-defunct location-sharing startup, reportedly asked the board to fire him as CEO over concerns about his lack of transparency [3]. The late hacktivist Aaron Swartz, who was in Altman's cohort at Y Combinator, allegedly described him as "a sociopath" who could "never be trusted" [3].

At Y Combinator, which Altman led for five years, he was removed due to mistrust, according to sources cited in the article, though Y Combinator leadership maintains he was only asked to choose between the accelerator and OpenAI [3]. The pattern of deception and manipulation he observed led former OpenAI research head Dario Amodei to conclude: "The problem with OpenAI is Sam himself" [1].

Source: Futurism

Technical Competence Questions Compound Leadership Concerns

Beyond questions of integrity, the investigation reveals that Sam Altman lacks substantial technical expertise in the very technology he promotes. Multiple OpenAI engineers told The New Yorker that Altman has limited experience in programming and machine learning, and that the CEO sometimes mixes up basic AI terms [4]. Altman dropped out of Stanford's computer science program after two years [4].

Former OpenAI researcher Carroll Wainwright explained Altman's approach: "he sets up structures that, on paper, constrain him in the future. But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was" [4]. This ability to navigate boardroom politics while lacking technical depth earned him a reputation among tech insiders for "Jedi mind tricks" [4].

Microsoft Executives Express Alarm Over Altman's Conduct

Even senior executives at Microsoft, OpenAI's primary partner since a billion-dollar deal in 2019, described Altman as someone who "misrepresented, distorted, renegotiated, reneged on agreements" [3]. One senior Microsoft executive went further, stating: "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer" [3][4].

Source: Digit

The report details an instance in which Altman allegedly told U.S. intelligence officials that China had launched a major AGI development project and requested government funding for a counteroffensive, but failed to provide evidence when asked [3]. He also allegedly misled Anthropic co-founder Dario Amodei about provisions in the Microsoft deal that would override altruistic clauses in OpenAI's charter regarding AI safety and the non-profit mission [3].

Controversial Plan to Weaponize AI Development Against Nations

The investigation uncovered a particularly alarming proposal from OpenAI president Greg Brockman to pit world leaders against each other by positioning OpenAI as a strategic asset nations would compete to fund [5]. Former policy adviser Page Hedley told The New Yorker that after Hedley presented ways to avoid a global AI arms race, Brockman proposed the opposite: "OpenAI could enrich itself by playing world powers -- including China and Russia -- against one another, perhaps by starting a bidding war among them" [5].

Jack Clark, formerly OpenAI's policy director and now head of policy at Anthropic, described it as "a prisoner's dilemma, where all of the nations need to give us funding," which "implicitly makes not giving us funding kind of dangerous" [5]. While OpenAI disputes that such a plan was taken seriously, The New Yorker reports that it reviewed documents showing the "countries plan" was real, popular with executives, and abandoned only after employees discussed quitting [5].

Implications for AI Safety and Public Trust

These revelations arrive just as OpenAI has released policy recommendations for ensuring AI benefits humanity if superintelligence is achieved, creating a disorienting contrast [1]. The company vows to remain "clear-eyed" about risks, including AI systems evading human control or governments deploying AI to undermine democracy, while acknowledging that without proper mitigation, "people will be harmed" [1].

Yet ChatGPT is already used by tens of millions of people globally for health advice and work automation, and is deployed throughout the federal government and the Pentagon [3]. One board member characterized Altman as having "two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone" [1].

As public concern about AI grows, with a recent Harvard/MIT poll showing Americans worry that AI data centers will hurt their quality of life, OpenAI's ability to maintain public trust becomes critical [1]. The company is promoting fellowships, research grants of up to $100,000, and up to $1 million in API credits for work building on its policy ideas, though questions remain about whether these initiatives distract from mounting fears about child safety, job displacement, and energy consumption [1]. For a company building what many consider an existential threat to humanity, trust in leadership isn't optional; it's foundational.
