Sam Altman faces serious trust questions as OpenAI insiders allege pattern of deception

Reviewed by Nidhi Govil


A damning New Yorker investigation reveals that OpenAI's chief scientist compiled 70 pages of evidence alleging Sam Altman consistently lied to executives and the board. Multiple insiders, including former board members, describe Altman as a sociopath who manipulates people while using AI safety commitments as bargaining chips. The revelations raise urgent questions about who controls humanity's most powerful technology.

Secret Memos Expose Pattern of Lying at OpenAI

In fall 2023, Ilya Sutskever, OpenAI's chief scientist, took an extraordinary step. He compiled roughly 70 pages of Slack messages and HR documents alleging that Sam Altman, the company's CEO, had systematically misrepresented facts to executives and deceived the board about internal safety protocols [1]. The memos, sent as disappearing messages to three fellow board members, began with a stark list: "Sam exhibits a consistent pattern of..." The first item was simply "Lying" [1].

Source: New Yorker


Sutskever, who had once officiated Greg Brockman's wedding at OpenAI's offices with a robotic hand as ring bearer, had grown deeply concerned as the company approached what he believed was Artificial General Intelligence (AGI). "I don't think Sam is the guy who should have his finger on the button," he told another board member [1]. The materials included images taken with cellphones, apparently to avoid detection on company devices, underscoring the fear among those raising concerns about Sam Altman's integrity [1].

Board Members Call Altman a Sociopath

Multiple OpenAI insiders went further than questioning trustworthiness. One board member was strikingly blunt in their diagnosis, telling The New Yorker that Altman is "unconstrained by truth" and possesses "almost a sociopathic lack of concern for the consequences that may come from deceiving someone" [2]. This assessment echoes warnings from Aaron Swartz, the famed coder and hacktivist who knew Altman from their 2005 Y Combinator batch. Before his death in 2013, Swartz told friends: "You need to understand that Sam can never be trusted. He is a sociopath. He would do anything" [2].

Source: Futurism


Board members Helen Toner, an AI policy expert, and Tasha McCauley, an entrepreneur, received Ilya Sutskever's memos as confirmation of what they'd already concluded: despite OpenAI's non-profit mission to prioritize humanity's safety over commercial success, Altman could not be trusted with civilization-altering technology [1].

AI Safety as a Bargaining Chip

The investigation portrays a manipulative and deceptive figure who dangles AI safety commitments to win over concerned engineers, then reneges on promises. Anthropic CEO Dario Amodei, who left OpenAI over differences with Altman, documented one stark example. During 2019 negotiations for a billion-dollar Microsoft investment, Amodei showed Altman a ranked list of safety demands to address anxiety that Microsoft might override OpenAI's safety commitments. Altman agreed to all items [2].

But when the deal closed in June, Amodei discovered a provision that negated the top demand. When confronted, Altman denied the provision existed, even after Amodei read it aloud to him [2]. This pattern of using AI safety as a bargaining chip, then abandoning commitments, appears central to leadership concerns at the company.

Strained Relationships with Microsoft and Industry Partners

Altman's alleged deception extends beyond OpenAI. Multiple Microsoft executives described him as repeatedly going back on his word, straining his relationship with CEO Satya Nadella. "He has misrepresented, distorted, renegotiated, reneged on agreements," one executive said [2]. Earlier this year, on the same day OpenAI reaffirmed Microsoft as exclusive provider for memoryless AI models, it announced a $50 billion deal with Amazon as exclusive reseller of its "Frontier" platform for AI agents, prompting Microsoft to signal willingness to sue for breach of contract [2].

One tech executive who has worked with Altman described his persuasive abilities as "Jedi mind tricks," adding: "He's just next level" [2]. This singular ability to convince skeptics he shares their priorities, whether they're engineers or the public, has been central to his power.

Questions About OpenAI's Founding Mission

OpenAI was established with an unusual premise: given AI's existential risk, the company structured itself as a nonprofit whose board had a duty to prioritize humanity's safety over commercial success or even survival [1]. The founders, including Altman, Sutskever, Brockman, and Elon Musk, asserted this required a CEO of uncommon integrity. Sutskever warned that "the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it" [1].

Former board member Sue Yoon offered a different view from the sociopath characterization, suggesting Altman isn't a "Machiavellian villain" but rather someone who deludes himself into believing his ever-shifting sales pitches. "He's too caught up in his own self-belief," she said. "So he does things that, if you live in the real world, make no sense. But he doesn't live in the real world" [2]. Whether the pattern reflects calculated lying or self-deception, it raises urgent questions about whether OpenAI can fulfill its founding mission with Altman at the helm.
