2 Sources
[1]
Sam Altman May Control Our Future -- Can He Be Trusted?
In the fall of 2023, Ilya Sutskever, OpenAI's chief scientist, sent secret memos to three fellow-members of the organization's board of directors. For weeks, they'd been having furtive discussions about whether Sam Altman, OpenAI's C.E.O., and Greg Brockman, his second-in-command, were fit to run the company. Sutskever had once counted both men as friends. In 2019, he'd officiated Brockman's wedding, in a ceremony at OpenAI's offices that included a ring bearer in the form of a robotic hand. But as he grew convinced that the company was nearing its long-term goal -- creating an artificial intelligence that could rival or surpass the cognitive capabilities of human beings -- his doubts about Altman increased. As Sutskever put it to another board member at the time, "I don't think Sam is the guy who should have his finger on the button."

At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. "He was terrified," a board member who received them recalled. The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying."

Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different.
The founders, who included Altman, Sutskever, Brockman, and Elon Musk, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company's success, or even its survival. The C.E.O. had to be a person of uncommon integrity. According to Sutskever, "any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility." But "the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it." In one of the memos, he seemed concerned with entrusting the technology to someone who "just tells people what they want to hear." If OpenAI's C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him. Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as a confirmation of what they had already come to believe: Altman's role entrusted him with the future of humanity, but he could not be trusted.
[2]
Inside Sources Say Sam Altman Is a Sociopath
You don't build a trillion-dollar AI empire by being a saint. In a sweeping new investigative piece from The New Yorker, numerous tech insiders paint a picture of OpenAI CEO Sam Altman as a relentless liar who wants everyone to like him while manipulating even the people closest to him to get what he wants. AI safety, in this slippery portrait of Altman, is merely a bargaining chip he dangles like a carrot to get concerned engineers, and anyone else worried about the tech's far-reaching consequences, on board before going back on his word.

Some of these insiders were strikingly blunt in their diagnoses: Altman was a literal "sociopath," one OpenAI board member alleged. "He's unconstrained by truth," they told The New Yorker. "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."

Aaron Swartz, the famed coder and hacktivist who died by suicide in 2013, used similar language to describe Altman. Swartz had been batchmates with Altman in the inaugural 2005 class of the Silicon Valley incubator Y Combinator, and warned his friends about Altman shortly before his death. "You need to understand that Sam can never be trusted," he told one confidante. "He is a sociopath. He would do anything." Altman, it's worth noting, has been accused by his sister in a civil suit of repeatedly sexually abusing her, beginning when she was three years old and he was 12. Altman, his mother, and his brothers all deny the claims.

The New Yorker piece characterizes Altman as more of a businessman than an engineer, leveraging an almost singular ability to get skeptics, be they engineers or the public, to believe that he holds the same priorities as they do. "He's unbelievably persuasive. Like, Jedi mind tricks," a tech executive who has worked with Altman told The New Yorker. "He's just next level."

One alleged victim of Altman's double dealing is Anthropic CEO Dario Amodei, who used to work at OpenAI but left to found his own safety-focused AI company over differences with Altman. In notes viewed by The New Yorker, Amodei wrote about negotiating a billion-dollar investment from Microsoft in 2019. Many at the company were reportedly anxious that Microsoft would override OpenAI's safety commitments, and Amodei addressed this by showing Altman a ranked list of safety demands, which Altman agreed to. But when the deal was closing in June, Amodei discovered that a provision had been added that obviated the top demand on the list. Amodei confronted Altman about this, but Altman denied the provision existed, even after Amodei read it aloud to him.

Another is Microsoft CEO Satya Nadella. Multiple executives at the Redmond giant described Altman as repeatedly going back on his word, straining his long-standing relationship with Nadella. "He has misrepresented, distorted, renegotiated, reneged on agreements," one executive told The New Yorker. An example from earlier this year: on the same day OpenAI reaffirmed Microsoft as the exclusive provider for its memoryless AI models, it announced a $50 billion deal with Amazon as the exclusive reseller of its "Frontier" platform for AI agents. (Microsoft signalled it was willing to sue over this alleged breach of contract.)

Sue Yoon, a former OpenAI board member, dished a slightly different, but no less unflattering, view of Altman than the "sociopath" picture. Altman was "not this Machiavellian villain," she said, but was able to delude himself into believing his ever-shifting sales pitches. "He's too caught up in his own self-belief," she told The New Yorker. "So he does things that, if you live in the real world, make no sense. But he doesn't live in the real world."
A damning New Yorker investigation reveals that OpenAI's chief scientist compiled 70 pages of evidence alleging Sam Altman consistently lied to executives and the board. Multiple insiders, including former board members, describe Altman as a sociopath who manipulates people while using AI safety commitments as bargaining chips. The revelations raise urgent questions about who controls humanity's most powerful technology.
In fall 2023, Ilya Sutskever, OpenAI's chief scientist, took an extraordinary step. He compiled roughly 70 pages of Slack messages and HR documents alleging that Sam Altman, the company's CEO, had systematically misrepresented facts to executives and deceived the board about internal safety protocols [1]. The memos, sent as disappearing messages to three fellow board members, began with a stark list: "Sam exhibits a consistent pattern of..." The first item was simply "Lying" [1].
Source: New Yorker
Sutskever, who had once officiated Greg Brockman's wedding at OpenAI's offices with a robotic hand as ring bearer, had grown deeply concerned as the company approached what he believed was artificial general intelligence (AGI). "I don't think Sam is the guy who should have his finger on the button," he told another board member [1]. The materials included images taken with cellphones, apparently to avoid detection on company devices, underscoring the fear among those raising concerns about Sam Altman's integrity [1].

Multiple OpenAI insiders went further than questioning trustworthiness. One board member was strikingly blunt in their diagnosis, telling The New Yorker that Altman is "unconstrained by truth" and possesses "almost a sociopathic lack of concern for the consequences that may come from deceiving someone" [2]. This assessment echoes warnings from Aaron Swartz, the famed coder and hacktivist who knew Altman from their 2005 Y Combinator batch. Before his death in 2013, Swartz told friends: "You need to understand that Sam can never be trusted. He is a sociopath. He would do anything" [2].
Source: Futurism
Board members Helen Toner, an AI policy expert, and Tasha McCauley, an entrepreneur, received Ilya Sutskever's memos as confirmation of what they'd already concluded: despite OpenAI's nonprofit mission to prioritize humanity's safety over commercial success, Altman could not be trusted with civilization-altering technology [1].

The investigation portrays a manipulative and deceptive figure who dangles AI safety commitments to win over concerned engineers, then reneges on promises. Anthropic CEO Dario Amodei, who left OpenAI over differences with Altman, documented one stark example. During 2019 negotiations for a billion-dollar Microsoft investment, Amodei showed Altman a ranked list of safety demands to address anxiety that Microsoft might override OpenAI's safety commitments. Altman agreed to all items [2]. But when the deal closed in June, Amodei discovered a provision that negated the top demand. When confronted, Altman denied the provision existed, even after Amodei read it aloud to him [2]. This pattern of using AI safety as a bargaining chip, then abandoning commitments, appears central to leadership concerns at the company.
Altman's alleged deception extends beyond OpenAI. Multiple Microsoft executives described him as repeatedly going back on his word, straining his relationship with CEO Satya Nadella. "He has misrepresented, distorted, renegotiated, reneged on agreements," one executive said [2]. Earlier this year, on the same day OpenAI reaffirmed Microsoft as exclusive provider for its memoryless AI models, it announced a $50 billion deal with Amazon as exclusive reseller of its "Frontier" platform for AI agents, prompting Microsoft to signal willingness to sue for breach of contract [2].

One tech executive who has worked with Altman described his persuasive abilities as "Jedi mind tricks," adding: "He's just next level" [2]. This singular ability to convince skeptics he shares their priorities, whether they're engineers or the public, has been central to his power.

OpenAI was established with an unusual premise: given AI's existential risk, the company structured itself as a nonprofit whose board had a duty to prioritize humanity's safety over commercial success, or even survival [1]. The founders, including Altman, Sutskever, Brockman, and Elon Musk, asserted this required a CEO of uncommon integrity. Sutskever warned that "the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it" [1].

Former board member Sue Yoon offered a different view than the sociopath characterization, suggesting Altman isn't a "Machiavellian villain" but rather someone who deludes himself into believing his ever-shifting sales pitches. "He's too caught up in his own self-belief," she said. "So he does things that, if you live in the real world, make no sense. But he doesn't live in the real world" [2]. Whether calculated lying or self-deception, the pattern raises urgent questions about whether OpenAI can fulfill its founding mission with Altman at the helm.