12 Sources
[1]
"The problem is Sam Altman": OpenAI Insiders don't trust CEO
On the same day that OpenAI released policy recommendations to ensure that AI benefits humanity if superintelligence is ever achieved, The New Yorker dropped a massive investigation into whether CEO Sam Altman can be trusted to actually follow through on OpenAI's biggest promises. Parsing the publications side by side can be disorienting.

On the one hand, OpenAI said it plans to push for policies to "keep people first" as AI starts "outperforming the smartest humans even when they are assisted by AI." To achieve this, the company vows to remain "clear-eyed" and transparent about risks, which it acknowledged includes monitoring for extreme scenarios like AI systems evading human control or governments deploying AI to undermine democracy. Without proper mitigation of such risks, "people will be harmed," OpenAI warned, before describing how the company could be trusted to advocate for a future where achieving superintelligence means a "higher quality of life for all."

On the other hand, The New Yorker interviewed more than 100 people familiar with how Altman conducts business. The publication also reviewed internal memos and interviewed Altman more than 12 times. The resulting story provides a lengthy counterpoint explaining why the public may struggle to trust OpenAI's CEO to "control the future" of AI, no matter how rosy the company's vision may appear.

Overall, insiders painted Altman as a people-pleaser who tells others what they want to hear while questing for power in an alleged bid to always put himself first. As one board member summed up Altman, he has "two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."

While The New Yorker found no "smoking gun," its reporters reviewed messages from OpenAI's former chief scientist, Ilya Sutskever, and former research head, Dario Amodei, that documented "an accumulation of alleged deceptions and manipulations." Many of the incidents could be shrugged off individually, but when taken together, both men concluded that Altman was not fostering a safe environment for advanced AI, The New Yorker reported. "The problem with OpenAI," Amodei wrote, "is Sam himself."

OpenAI's worried public is souring on AI

Altman either disputed claims in the story or else claimed to have forgotten about certain events. He also attributed some of his shifting narratives to the changing landscape of AI and admitted that he's been conflict-avoidant in the past. But his seeming contradictions are getting harder to ignore as scrutiny of OpenAI intensifies amid growing government reliance on its models and lawsuits labeling its tech as unsafe.

Perhaps most visibly to the public, Altman has recently shifted away from positioning OpenAI as a sort of savior blocking AI doomsday scenarios, instead adopting a "tone" of "ebullient optimism," The New Yorker reported. The policy recommendations echo this at times. Discussing the recommendations -- which include experimenting with shorter workweeks and creating a public wealth fund to share AI profits -- OpenAI's chief global affairs officer, Chris Lehane, confirmed to The Wall Street Journal that the company is urgently concerned about negative public opinions about AI.
While announcing their big ideas to spare humanity from AI dangers, OpenAI also promoted "a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas." However, The New Yorker's report makes it easier to question whether the recommendations were rolled out to distract from mounting public fears about child safety, job displacement, or energy-guzzling data centers. One recent Harvard/MIT poll found that Americans' biggest concern is that powering AI will hurt their quality of life, Axios reported.

Ultimately, these concerns might sway votes for Democrats and Republicans ahead of the midterm elections, the WSJ noted, as data center moratoriums that could slow AI advancement are gaining traction. For Altman and his company, getting the public to buy into their vision of AI at this critical juncture likely feels essential, since Republicans losing control of Congress could pave the way for stricter AI safety laws that, The New Yorker noted, Altman has privately lobbied against. Without trust in Altman, it's likely a much harder sell to convince the public that OpenAI isn't simply saying whatever it will take to entrench its own dominance, The New Yorker suggested.

What exactly is OpenAI pitching?

"We don't have all, or even most of the answers," OpenAI said. Instead, the company characterized its "industrial policy for the intelligence age" as "initial ideas for an industrial policy agenda to keep people first during the transition to superintelligence." Calling for "common-sense" regulations and a public-private partnership to quickly iterate on successes, OpenAI pitched "ambitious" policy ideas to ensure that everyone can access AI and profit from it. Its bushy-tailed vision acknowledged that it hopes to achieve what society never did: guarantee Internet access and ensure AI is "fairly deployed" across the US, with everyone trained to use it.

Worker protections are a focus of OpenAI's plan. Recommendations included involving workers in discussions on how AI systems work to improve productivity and make workplaces safer, as well as on how to "set clear limits on harmful uses of AI." OpenAI also suggested creating a tax on automated labor that could be used to fund core programs like Social Security, Medicaid, SNAP, and housing assistance as companies rely less on human labor.

Among other enticing ideas was a plan to "incentivize employers and unions to run time-bound 32-hour/four-day workweek pilots with no loss in pay that hold output and service levels constant, then convert reclaimed hours into a permanent shorter week, bankable paid time off, or both." Additionally, OpenAI proposed a "public wealth fund" that "provides every citizen -- including those not invested in financial markets -- with a stake in AI-driven economic growth." "Returns from the Fund could be distributed directly to citizens, allowing more people to participate directly in the upside of AI-driven growth, regardless of their starting wealth or access to capital," OpenAI said.

As AI takes on more tasks, humans can gravitate toward care-centric work, OpenAI suggested, recommending policy ideas to help displaced workers get training to work in health care, elderly care, daycare, or community service settings. To ensure people are attracted to those roles -- historically undervalued as women's work -- OpenAI suggested initiatives to help society recognize that caregiving is "economically valuable work."
Human workers will also be needed to use AI to accelerate scientific advancements, OpenAI said. However, all these public benefits that OpenAI promises can only be realized if we build a "resilient society" that can quickly respond to risky implementations and "keep AI safe, governable, and aligned with democratic values," the company said. That aspect of OpenAI's vision requires firms like OpenAI to develop safety systems, among other efforts, that will help improve public trust in AI. And we should trust those systems will work and only interfere with these firms when actual dangers are looming, OpenAI seems to suggest. "As we progress toward superintelligence, there may come a point where a narrow set of highly capable models -- particularly those that could materially advance chemical, biological, radiological, nuclear, or cyber risks -- require stronger controls," OpenAI said. When that day arrives, OpenAI opined, there should be a global network in place to communicate emerging risks. However, only the firms with the most advanced models should be subjected to rigorous audits, so that smaller firms can still compete. That's the path to ensure no firm's dominant position can be abused to unfairly shut down rivals or weaken democratic values, OpenAI said, while insisting that public input is vital to AI's success. Altman has previously persuaded "a tech-skeptical public that their priorities, even when mutually exclusive, are also his priorities," The New Yorker reported. But for the public, which is already reporting alleged harms from OpenAI models, it might be getting harder to entertain lofty ideas from a company that is led by "the greatest pitchman of his generation," The New Yorker reported. One OpenAI researcher told The New Yorker that Altman's promises can sometimes seem like a stopgap to overcome criticism until he reaches the next benchmark. When it comes to superintelligence, some optimistic experts think it could take two years, which is longer than Elon Musk stayed at OpenAI before famously criticizing Altman's leadership and leaving to start his own AI firm. Altman "sets up structures that, on paper, constrain him in the future," the OpenAI researcher told The New Yorker. "But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was."
[2]
Sam Altman May Control Our Future -- Can He Be Trusted?
In the fall of 2023, Ilya Sutskever, OpenAI's chief scientist, sent secret memos to three fellow-members of the organization's board of directors. For weeks, they'd been having furtive discussions about whether Sam Altman, OpenAI's C.E.O., and Greg Brockman, his second-in-command, were fit to run the company. Sutskever had once counted both men as friends. In 2019, he'd officiated Brockman's wedding, in a ceremony at OpenAI's offices that included a ring bearer in the form of a robotic hand. But as he grew convinced that the company was nearing its long-term goal -- creating an artificial intelligence that could rival or surpass the cognitive capabilities of human beings -- his doubts about Altman increased. As Sutskever put it to another board member at the time, "I don't think Sam is the guy who should have his finger on the button." At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. "He was terrified," a board member who received them recalled. The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying." Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different. The founders, who included Altman, Sutskever, Brockman, and Elon Musk, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company's success, or even its survival. The C.E.O. had to be a person of uncommon integrity. According to Sutskever, "any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility." But "the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it." In one of the memos, he seemed concerned with entrusting the technology to someone who "just tells people what they want to hear." If OpenAI's C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him. Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as a confirmation of what they had already come to believe: Altman's role entrusted him with the future of humanity, but he could not be trusted.
[3]
Anonymous Sources Detail Sam Altman’s Alleged Untrustworthiness in New Report
On Monday, The New Yorker published a lengthy investigation detailing the days leading up to and following Sam Altman's brief ousting as OpenAI's CEO. Back in late 2023, OpenAI's board of directors shocked Silicon Valley by firing Sam Altman seemingly out of the blue. Following a five-day media blitz by Altman and his supporters and a public letter demanding his return, Altman came back to the company as CEO. The board members who had orchestrated the coup were ousted and replaced with Altman allies such as economist Larry Summers and former Facebook CTO Bret Taylor, who is currently the chairman of the board at OpenAI. When Altman was reinstated as CEO, OpenAI employees began referring to the turbulent few days as "the Blip," in reference to the blip in the Marvel Cinematic Universe when the supervillain Thanos made half the world's population disappear for five years. According to the New Yorker report, citing interviews with dozens of people in the know, including Altman himself, the OpenAI executive was ousted because his own board members did not find him trustworthy enough to "have his finger on the button" of artificial superintelligence, a theoretical and highly contested super-powered future AI system that could outperform human intelligence on all fronts. The term is sometimes used interchangeably with artificial general intelligence (AGI), although it describes a step even beyond that. Following secret memos sent to fellow board members by OpenAI's then-chief scientist Ilya Sutskever, the board reportedly compiled a roughly seventy-page document evidencing Altman's "consistent pattern" of lying, including about internal safety protocols. The report says that Altman's alleged history of lying extends to a time before OpenAI as well. According to the investigation, senior employees at Altman's previous startup, a now-defunct location-sharing service called Loopt, asked the board to fire him as CEO due to concerns with his lack of transparency. The accusations followed him to startup accelerator Y Combinator, which Altman led for five years until he was removed due to mistrust, according to the sources cited in the article. Y Combinator leadership has said that he wasn't fired but was only asked to choose between the startup accelerator and OpenAI. The late hacktivist and former Reddit co-owner Aaron Swartz, who was in Altman's cohort when he first joined Y Combinator as an entrepreneur with Loopt, allegedly described him as "a sociopath" who could "never be trusted." At OpenAI, Altman was accused of lying to executives and even to government officials. The report details an instance in which Altman told U.S. intelligence officials that China had launched a major AGI development project and asked for government funding to launch a counteroffensive, but then failed to show any evidence when asked. The report also details instances of Altman allegedly gaslighting Anthropic co-founder and then-OpenAI employee Dario Amodei regarding a provision in the billion-dollar Microsoft deal OpenAI signed in 2019 that would override the altruistic clauses Amodei had included in the charter for the company. The clause in question was about AGI, and posited that if another company found a way to build it safely, then OpenAI would "stop competing with and start assisting this project," as a non-profit with a safety-first objective. OpenAI has since changed its structure to become a for-profit corporation. 
Even some Microsoft senior executives, with whom OpenAI has had a long partnership since the 2019 deal, described Altman as someone who "misrepresented, distorted, renegotiated, reneged on agreements." One senior executive even apparently said this of Altman: "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer." Those are alarming words to read about any executive in charge of a company as large and consequential as OpenAI, but they have even more weight considering that OpenAI is the leading company creating a technology that many, including its early employees, have defined as a possible existential threat to humanity. Under Sam Altman's leadership, OpenAI's technology has infiltrated pretty much all aspects of modern life. OpenAI's AI is used by tens of millions of people around the world for health advice, and by numerous others for everything from automating work across industries to finishing homework for students and even offering murky companionship to some lonely people who seek it. ChatGPT is used throughout the federal government as well, and Altman has also recently sold the technology to the Pentagon. This is all fueled by Altman's salesmanship. He has sold the potential and purported realities of ChatGPT to so many people, leading to an unprecedented and potentially fragile dealmaking spree that has garnered so much investment that some experts say it is propping up the entire American economy right now. The New Yorker report also claims that Altman assured the board that GPT-4 had been approved by a safety panel, which turned out to be a misrepresentation when a board member requested documentation of the approvals. Sutskever claimed in the memos that Altman also downplayed the need for safety approvals in conversation with former OpenAI CTO Mira Murati, citing the company's general counsel. But when Murati asked the general counsel about it, he said he was "confused where sam got that impression." The accusations around ChatGPT's safety features are particularly damning, considering the fallout of GPT-4o, the iteration of ChatGPT that followed GPT-4. The model's knack for sycophancy reportedly caused instances of "AI psychosis" in vulnerable users, with some cases ending in fatalities. Some of Altman's inconsistencies have been well-documented publicly, too. Time and again, the OpenAI chief has published contradictory statements on things like the merits of putting ads in AI chatbots, the need for AI regulation, or whether ChatGPT's voice feature unveiled in 2024 was inspired by Scarlett Johansson's performance in the movie "Her." Altman was also scrutinized recently over a whopping $100 billion Nvidia deal that just did not materialize as initially announced. The report also details how the company's culture vastly changed following Altman's reinstatement as CEO. Before "the Blip," the company had approached the concept of AGI cautiously, while after, AGI reportedly became a North Star for the company, with slogans like "feel the AGI" seen on merchandise around its offices. The alleged difference was seen in practice, too, as OpenAI disbanded some key teams focusing on chatbot safety, like the existential AI risk team and the superalignment team, which was co-led by Sutskever. The report comes as Altman's leadership is put under a microscope as the company begins preparing for a potential IPO. 
According to a recent report from The Information, Altman seems to be at odds with executives once again, this time regarding OpenAI's readiness for an IPO. Altman reportedly wants to go public as soon as the fourth quarter of this year and is committing to spend $600 billion in the next five years despite expectations that OpenAI will burn more than $200 billion before it starts making money. Meanwhile, the report claims that OpenAI CFO Sarah Friar does not believe the company is ready to go public this year at all, due to the risky spending commitments. Unlike Altman, Friar reportedly does not yet believe that OpenAI's revenue growth can support its financial commitments, nor is she certain that the company will even need to pour that much money into AI servers.
[4]
Sam Altman's Coworkers Say He Can Barely Code and Misunderstands Basic Machine Learning Concepts
Sam Altman, OpenAI's CEO and the public face of ChatGPT, has carved out an image for himself as one of the preeminent AI whisperers of our age, whose influence supposedly extends to the White House on the strength of his ideas alone. Or at least that's the image he's managed to cultivate. A new exposé in the New Yorker paints a different portrait, and it's substantially more vexing. Drawing on interviews with numerous OpenAI insiders who worked with Altman, the article portrays the CEO not as a technical wiz, but as a skilled manipulator -- and one with a surprisingly shallow grasp of the AI systems his company is building. According to numerous engineers interviewed for the article, Altman lacks experience in both programming and machine learning -- a shortage of expertise that becomes obvious when the CEO mixes up basic AI terms. It's important to note that Altman dropped out of a Stanford computer science program after two years. We're not here to shame anyone based on their education, but as the CEO of what may soon become the world's most valuable publicly traded company, the myth surrounding Altman matters. Cast as the chief acolyte of the "god of scale" or as a "genius of digital tech," he enjoys a kind of cult credibility that lets him slip out of tight spots that might ensnare lesser entrepreneurs. Former OpenAI researcher Carroll Wainwright, speaking to the New Yorker, put it plainly: "he sets up structures that, on paper, constrain him in the future. But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was." This knack for papering over technical shortcomings with boardroom maneuvers earned Altman a reputation as a practitioner of "Jedi mind tricks," one tech insider who worked with the CEO explained. As one senior executive at Microsoft put it to the New Yorker: "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer."
[5]
OpenAI reportedly kicked around an 'insane' plan to pit world leaders against each other like a Call of Duty villain
The company disputes the claim that such an idea was taken seriously, but ex-employees say it was real.

The New Yorker has published an enormous feature about OpenAI CEO Sam Altman and his disputed trustworthiness, citing more than one person who has accused the generative AI mogul of habitual dishonesty. The 16,000-word article (available online or in The New Yorker's latest print edition) adds new context to a number of widely-reported episodes from Altman's career, including his 2023 ousting from OpenAI and return, his beef with Elon Musk, and the disintegration of his persona as a humanity-first AI safety advocate, which strained credulity from the start and is now especially comical in juxtaposition with his current role as a profit-seeking Trump ally who recently signed a deal with the US Department of War. A particularly unflattering portion of the article discusses a defunct plan to pit world leaders against each other by positioning OpenAI as a kind of nuclear weapon that they'd better compete to invest in, lest they be left behind. OpenAI denies that characterization of the discussions, calling it "ridiculous," but former OpenAI policy advisers say otherwise. One of those former advisers is OpenAI critic Page Hedley, who told The New Yorker that the idea came from OpenAI president and major Trump donor Greg Brockman. After Hedley presented ways to avoid a global AI arms race, Brockman reportedly proposed the opposite. In The New Yorker's words, the proposal was that "OpenAI could enrich itself by playing world powers -- including China and Russia -- against one another, perhaps by starting a bidding war among them." Jack Clark, who had been OpenAI's policy director when the plan was discussed and is now head of policy at competitor Anthropic, described it as "a prisoner's dilemma, where all of the nations need to give us funding" which "implicitly makes not giving us funding kind of dangerous." OpenAI says that no such plan was taken seriously, and that at most "ideas were batted around at a high level about what potential frameworks might look like to encourage cooperation between nations." The New Yorker, which says it reviewed documents from the time, reports however that the "countries plan" was real, was popular with OpenAI executives, and was only abandoned after employees discussed quitting over it. A junior researcher told the publication that, during a meeting in which the plan was discussed, they thought: "This is completely fucking insane." The big article -- here's the link again -- contains many other details about OpenAI's part in the great AI bubble and Altman's reputation among his peers, and is a decent way to pass some time while you wait for the bubble to pop so you can buy RAM again. (It may be wishful thinking on my part that RAM prices will ever go back to normal, but one has to have hope.)
[6]
Inside Sources Say Sam Altman Is a Sociopath
You don't build a trillion dollar AI empire by being a saint. In a sweeping new investigative piece from The New Yorker, numerous tech insiders paint a picture of OpenAI CEO Sam Altman as a relentless liar who wants everyone to like him while manipulating even the people closest to him to get what he wants. AI safety, in this slippery portrait of Altman, is merely a bargaining chip he dangles like a carrot to get concerned engineers -- and anyone else worried about the tech's far-reaching consequences -- on board, before going back on his word. Some of these insiders were strikingly blunt in their diagnoses: Altman was a literal "sociopath," one OpenAI board member alleged. "He's unconstrained by truth," they told The New Yorker. "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone." Aaron Swartz, the famed coder and hacktivist who died by suicide in 2013, used similar language to describe Altman. Swartz had been batchmates with Altman in the inaugural class of 2005 at the Silicon Valley incubator Y Combinator, and warned his friends about Altman shortly before his passing. "You need to understand that Sam can never be trusted," he told one confidante. "He is a sociopath. He would do anything." Altman, it's worth noting, has been accused by his sister in a civil suit of repeatedly sexually abusing her, beginning when she was three years old and he was 12. Altman, his mother, and his brothers all deny the claims. The New Yorker piece characterizes Altman as more of a businessman than an engineer, leveraging an almost singular ability to get skeptics, be they engineers or the public, to believe that he holds the same priorities as them. "He's unbelievably persuasive. Like, Jedi mind tricks," a tech executive who has worked with Altman told The New Yorker. "He's just next level." One alleged victim of Altman's double dealing is Anthropic CEO Dario Amodei, who used to work at OpenAI but left to found his own safety-focused AI company over differences with Altman. In notes viewed by The New Yorker, Amodei wrote about negotiating a billion-dollar investment from Microsoft in 2019. Many at the company were reportedly anxious that Microsoft would override OpenAI's safety commitments, and Amodei made sure to address this by showing Altman a ranked list of safety demands, which Altman agreed to. But when the deal was closing in June, Amodei discovered a provision had been added that obviated the top demand on the list. Amodei confronted Altman about this, but Altman denied the provision existed, even after Amodei read the provision aloud to him. Another is Microsoft CEO Satya Nadella. Multiple executives at the Redmond giant described Altman as repeatedly going back on his word, straining his long-standing relationship with Nadella. "He has misrepresented, distorted, renegotiated, reneged on agreements," one executive told The New Yorker. An example from earlier this year: on the same day OpenAI reaffirmed Microsoft as the exclusive provider for its memoryless AI models, it announced a $50 billion deal with Amazon as its exclusive reseller of its "Frontier" platform for AI agents. (Microsoft signalled it was willing to sue over this alleged breach of contract.)
Sue Yoon, a former OpenAI board member, dished a slightly different, but no less unflattering, view of Altman than the "sociopath" picture. Altman was "not this Machiavellian villain," she said, but was able to delude himself into believing his ever-shifting sales pitches. "He's too caught up in his own self-belief," she told The New Yorker. "So he does things that, if you live in the real world, make no sense. But he doesn't live in the real world."
[7]
Sam Altman's headache: Lawsuits, controversy and investigations
ChatGPT lawsuits raise serious questions about AI accountability and safety

For the better part of the past few years, Sam Altman has enjoyed being the posterboy of AI, thanks to the wonders of ChatGPT. But alongside product launches, keynote bravado, and OpenAI's skyrocketing valuation, something more consequential has been piling up. Something that Sam Altman can't avoid. A legal reckoning of unprecedented scale is staring Sam Altman in the face, for alleged personal transgressions as well as decisions he seemingly took in his capacity as CEO of OpenAI. It's not a single lawsuit, but a collection of cases and investigations that have been hounding Sam Altman. These cases are either recently filed or about to get underway, each probing a different fault line in Sam Altman's OpenAI story.

The most high-profile of these legal challenges against Sam Altman is the federal trial of Elon Musk vs Altman, which is set to begin jury selection on April 27 in Oakland. It began as a disagreement over OpenAI's direction, slowly evolving into a full-blown legal battle over intent, AI governance, and money. Elon Musk's core allegation couldn't be any simpler: he's accusing OpenAI of abandoning its non-profit roots after securing up to $50 million in contributions from him between 2015 and 2018. Far from just a philosophical grievance, Elon Musk is accusing Sam Altman of fraud and breach of charitable trust. Needless to say, the stakes aren't merely financial, as Elon Musk is pushing for Altman's removal from OpenAI's leadership position.

Running parallel to the Elon Musk case is a deeply personal and far more sensitive case accusing Sam Altman of something more heinous. Annie Altman, the younger sister of Sam Altman, has filed a civil lawsuit against her brother. Filed under Missouri's Childhood Sexual Abuse statute, the suit alleges that he abused her for years during their childhood. Sam Altman has denied the allegations and countersued his sister for defamation, arguing the lawsuit stems from financial disputes. This case raises questions of historical accountability while also causing Sam Altman reputational damage in the present. Unlike the Musk trial, the case lodged by Sam Altman's sister attacks his personal credibility and moral character.

If the Musk case is about AI governance and the Annie Altman case about personal conduct, then all the different lawsuits linking ChatGPT to wrongful deaths are about tech responsibility at scale. In case you didn't know, there are about a dozen lawsuits accusing ChatGPT of aiding suicides. All of these lawsuits, which have now been consolidated, accuse OpenAI and Sam Altman of building an AI chatbot that contributed to suicides. These cases allege that ChatGPT encouraged harmful conduct while failing to intervene meaningfully in its responses to people who died by suicide. In at least one of these cases, Sam Altman is accused of personally overriding safety objections to accelerate ChatGPT's deployment. These cases will test whether AI CEOs can be held directly accountable for the consequences of their products.

Beyond all the courtroom drama staring Sam Altman in the face, there's also the quieter, slower grind of regulatory scrutiny that's gathering pace. The US Securities and Exchange Commission is reportedly investigating whether Sam Altman exhibited a pattern of misleading investors.
If that wasn't enough, in 2023, the US Federal Trade Commission started probing OpenAI's consumer practices -- specifically around privacy, data security, and user harm. These aren't headline-grabbing trials, but they may prove just as consequential -- for Sam Altman, OpenAI, and the wider AI ecosystem. Individually, each of these cases targets a different aspect of what Sam Altman signifies: everything from AI governance to personal conduct, product impact, and regulatory compliance is under scrutiny. Collectively, they test what it means for Sam Altman, still the posterboy of AI, to lead in the age of AI.
[8]
OpenAI staff think Sam Altman doesn't know about machine learning: Report
His leadership, including the 2023 OpenAI controversy, highlights concerns around transparency and decision-making.

OpenAI CEO Sam Altman has always been considered one of the most influential figures in the field of artificial intelligence. His predictions about the future of artificial intelligence have played an important role in shaping the discourse around the subject. However, a recent report has cast some doubt on his credibility. Based on interviews with individuals who are currently associated with the company or were once a part of it, the report offers a nuanced depiction of a highly respected figure who may not completely understand the technology he champions.

The report, published by The New Yorker, is based on interviews with people who have worked closely with Altman. Several of them describe him as a highly persuasive figure who can align different groups by speaking to their concerns. At the same time, some engineers claim he struggles with basic technical language and occasionally mixes up key concepts.

Altman's background reflects a different path from many tech leaders. He left Stanford University in 2005 after two years of studying computer science to start his first company, Loopt. Over time, he built his reputation less as an engineer and more as a dealmaker who could bring together talent and funding. One former OpenAI researcher, Carroll Wainwright, said Altman has a rare ability to influence how people think without them noticing it. According to the report, this ability helped him guide OpenAI's growth by keeping a balance between what investors, researchers, and policymakers needed.

It's also important to consider the events of 2023, when Altman was temporarily removed as CEO. Board members were worried about trust and transparency as the company became involved in designing more sophisticated AI programs, and they alleged that Altman was secretive about safety protocols.
[9]
Sam Altman investigation: 6 crazy revelations
The New Yorker report on Sam Altman is damning no doubt, but it's no hit job, even if it feels like one. It's a methodical, painstakingly reconstructed pattern of Altman's transgressions. It was built from internal documents, court depositions, disappearing messages and over a hundred interviews. As a result, most of the damage to Altman's reputation doesn't come from a single smoking gun. It's a gaping wound accumulated over a hundred small cuts. Of them all, here are six revelations about Altman's alleged wrongdoings that hit us the hardest.

Ilya Sutskever once officiated Greg Brockman's wedding in OpenAI's own offices. Then he spent weeks compiling roughly 70 pages of Slack messages and HR documents alleging that his friend and CEO had a consistent, documented pattern of deception. The first item on the list was a single word. Lying. He was so afraid of being caught that material was photographed on personal devices to avoid company servers. The final memos were sent to fellow board members as disappearing messages. A board member who received them said he was terrified. The Ilya Memos, never fully disclosed before this investigation, allege Altman misrepresented facts to executives and deceived them about internal safety protocols. The man who told recruits they were going to save the world had concluded the person leading that mission couldn't be trusted with it.

In December 2022, Altman assured his board that controversial GPT-4 features had cleared the safety panel. Board member Helen Toner asked for documentation. There was none. The two features, one letting users fine-tune the model for specific tasks and another deploying it as a personal assistant, had never been approved. That was bad. What came next was worse. As board member Tasha McCauley was walking out of that same meeting, an employee pulled her aside. Did she know about the breach in India? She did not. Altman had spent hours briefing the board across multiple sessions and never once mentioned that Microsoft had released an early version of ChatGPT in India without completing a required safety review. A researcher at the time said it was just kind of completely ignored. The board whose entire mandate was safety oversight had to find out in a corridor.

WilmerHale, the firm that handled the internal investigations of Enron and WorldCom, was brought in to review the circumstances of Altman's firing. It, as you would expect, cleared him. It also produced no written report whatsoever. Findings were delivered as oral briefings only, apparently on the advice of the personal attorneys of the two new board members. Six people close to the inquiry said it appeared designed to limit transparency, focused narrowly on clear criminality rather than the integrity questions that had actually motivated the firing. OpenAI announced the outcome in 800 words on its website. The most powerful AI company in the world had its CEO investigated and made sure nothing was written down.

In 2017, while publicly positioning itself as humanity's last line of defense against rogue AI, OpenAI was internally discussing playing Russia and China against each other in a bidding war for its technology. The thinking, according to policy adviser Page Hedley, was essentially that it worked for nuclear weapons so why not AI.
The plan was eventually dropped, but not because anyone had serious concerns about triggering a great power conflict. It was dropped because employees threatened to quit. Altman, Hedley noted, could not afford to lose staff. The possibility of starting a war was apparently a secondary consideration.

When Altman sought a security clearance during the Biden administration, RAND Corporation staffers coordinating the process raised concerns about his foreign financial entanglements. The comparison they reached for was Jared Kushner, who had been recommended against for a clearance for similar reasons. Altman withdrew from the process. He has since described Sheikh Tahnoon bin Zayed, the UAE's national security adviser who controls one and a half trillion dollars in sovereign wealth, as a dear personal friend. Make of that what you will.

Brian Chesky processed watching his friend get fired and reinstated by giving a two-hour talk at a YC alumni gathering that felt, by his own description, like group therapy. The message was that founders should trust their instincts and ignore anyone who questions them. Paul Graham wrote it up and called it Founder Mode. It became one of Silicon Valley's most discussed ideas of 2024. What nobody mentioned was that the whole thing started as one man working through the emotional wreckage of a boardroom coup. It was not a management philosophy. It was grief with better branding.

What the accumulation of these six findings reveals is not someone who is a villain in the traditional sense. It is a man who is genuinely brilliant at making people believe he shares their priorities, right up until the moment he doesn't need to anymore. Altman didn't deceive people despite wanting to build something important. He deceived people because wanting to build something important was always the pitch.
[10]
Sam Altman to blame? Why Microsoft and OpenAI are drifting apart
I want you to picture Satya Nadella in November 2023. One of the most powerful CEOs in the world, thirteen billion dollars deep into a single bet, finding out his most important business partner just got fired mere moments before the information became public. He called Reid Hoffman, who started calling around looking for something concrete. Embezzlement. Harassment. Anything. They found nothing. So Nadella did what any rational person would do when their thirteen billion dollar investment is on fire. He picked a side, and the side he picked was Altman's. That might be the most expensive loyalty call in tech history.

In case you haven't noticed, all hell has broken loose for Sam Altman, with severely damaging allegations made by the New Yorker deep dive into not just OpenAI and ChatGPT, but Altman's own actions as a person and CEO. To fully measure the scope of Altman's shady past, the Microsoft thread is the one with the longest tail, because it goes beyond bitter ex-employee grievances: the first big tech company to build Altman's street cred as an AI maverick is the same Microsoft now privately calling him out.

To understand how strained things have gotten, it helps to remember that Microsoft built its AI capabilities on the back of this partnership. In 2024, Microsoft rolled out Copilot across its entire product suite - Word, Excel, Teams, Outlook - betting its enterprise credibility on OpenAI's models being the best in the business. It was the most visible AI integration any company had attempted at scale, and Microsoft staked its reputation with hundreds of millions of business users on Altman delivering. Copilot became the face of Microsoft's AI future. OpenAI was the engine underneath it. Now, their own legal terms claim that Copilot is for entertainment purposes only.

Multiple senior Microsoft executives told the investigators that Altman has "misrepresented, distorted, renegotiated, reneged on agreements." OpenAI reaffirmed Microsoft as exclusive cloud provider for its stateless models, then announced on the very same day a fifty billion dollar deal handing Amazon exclusive reseller rights for its enterprise agent platform. Microsoft believes that arrangement cuts directly against what they were promised. Altman's camp disagrees, because of course it does. But one Microsoft executive didn't stop at the contractual dispute. He told the New Yorker that there is a small but real chance Altman ends up remembered in the same breath as Bernie Madoff and Sam Bankman-Fried. Not as a cautionary tale about hubris or bad judgment but as a scammer. Think about what it would take for someone inside Microsoft to say that on the record.

This is the company that announced it would build a competing AI initiative just to pressure a board into reinstating Altman. Nadella and Altman were co-writing press statements over text during the chaos of the firing. Microsoft didn't just back Altman in 2023, it saved him and his career. And now its own executives are reaching for the word fraud. The investigation frames this as one consequence of a broader pattern. A CEO who, according to former colleagues, sets up structures that appear to constrain him, then dismantles them the moment they actually start constraining him. What Microsoft believed to be a binding commitment may have been nothing more than a really well-presented sales pitch.
Nadella bet thirteen billion dollars that Sam Altman was the right person to build the future. The people who work for him are starting to wonder if he even read the contract.
[11]
Sam Altman in 2023: AI that lies has "magic"
Massive investigation reveals OpenAI CEO's pattern of deception

I've been using ChatGPT long enough to know it lies to me sometimes. Not maliciously, not even badly, but smoothly, confidently, and in the same warm tone it uses when it is actually correct. The first few times it happened I double-checked. After a while I just stopped trusting it. But most people don't do that. Most people trust it more every time it sounds sure of itself. I used to think that was a user problem. Turns out it might be a product decision.

A mammoth New Yorker investigation published yesterday, based on never before disclosed internal documents and over a hundred interviews, paints a deeply unflattering picture of OpenAI CEO Sam Altman. There's a lot in it - secret memos alleging serial deception, a botched internal investigation, Gulf state entanglements that spooked US national security officials. But buried near the very end is a quote that, at least to me, might be the most revealing thing in the whole piece. In 2023, shortly before his brief firing from OpenAI, Altman was asked about AI models that hallucinate, the polite industry term for when your chatbot makes things up with complete confidence. His response was, for lack of a better term, striking. He said that if you want to train a model to never say anything it isn't 100% certain about, you can do that, but it won't have "the magic that people like so much."

Let that settle for a moment. This wasn't one of those "it's not a bug, it's a feature" moments that happen in tech and gaming all the time. This was the CEO of the most widely used AI company in the world, talking about the product used by hundreds of millions of people daily, making a conscious case for allowing falsehoods because they make the experience more enjoyable.

And it worked. GPT-4o remains the benchmark by which most people still judge AI chatbots. It was fluent, warm, confident, and occasionally completely wrong in ways that are very hard to detect. People loved it to the point that even today, after months of it being gone, things like #keep4o and #BringBack4o are still trending on X. Entire workflows have been built around a tool whose own creator argued that a little dishonesty is part of the appeal.

According to The New Yorker's piece, colleagues who worked closely with Altman for years describe a man with a compulsive need to tell people what they want to hear and a consistent pattern of lying. One former board member describes him as having a near-sociopathic gap between the desire to please and any concern for the consequences of deception. This does make me wonder. When Altman decided that a little magic was worth a little dishonesty, was he building a product or just building himself?
[12]
Sam Altman misled board on GPT-4 safety approvals before getting fired, claims report
Undisclosed issues deepened mistrust before Altman's firing.

Back in late 2023, OpenAI's board of directors shocked everyone by firing CEO Sam Altman out of the blue. Now, a report by The New Yorker sheds light on what was happening inside the company in the months leading up to that decision. During a meeting in December 2022, Altman told board members that key features in the upcoming GPT-4 model had already been approved by a safety panel. This was meant to reassure the board that proper checks were in place before launch. However, Helen Toner, a board member and AI policy expert, decided to verify this. She asked for documentation, and what she found told a different story. Some of the most controversial features, including the ability to fine-tune the model for specific uses and to deploy it as a personal assistant, had not actually received safety approval.

Also, as board member Tasha McCauley was leaving the meeting, an employee quietly asked if she knew about the breach in India. Microsoft had already released an early version of ChatGPT there without completing a required safety review. During many hours of briefing, Altman had not informed the board about this breach. 'It just was kind of completely ignored,' Jacob Hilton, an OpenAI researcher at the time, said.

Inside the company, some researchers believed priorities were shifting in the wrong direction. Researcher Carroll Wainwright described it as a 'continual slide toward emphasising products over safety.' Former executive Jan Leike wrote to the board, warning, 'OpenAI has been going off the rails on its mission.' He added, 'We are prioritising the product and revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third.'
A sweeping New Yorker investigation exposes deep trust issues surrounding OpenAI CEO Sam Altman. Internal memos from former chief scientist Ilya Sutskever document alleged deceptions and manipulations, with insiders questioning whether Altman can be trusted to control superintelligence. The 16,000-word report reveals a pattern of lying that extends from his previous startups to his current role leading the world's most influential AI company.
In fall 2023, Ilya Sutskever, OpenAI's then-chief scientist, sent secret memos to fellow members of the OpenAI board of directors documenting serious concerns about Sam Altman's fitness to lead the company [2]. The roughly 70-page compilation of Slack messages and HR documents alleged that the OpenAI CEO misrepresented facts to executives and board members while deceiving them about internal safety protocols [1]. One memo began with a list headed "Sam exhibits a consistent pattern of..." with the first item being "Lying" [2].

Sutskever, who had once officiated Greg Brockman's wedding in 2019 at OpenAI's offices, grew increasingly convinced that Altman should not "have his finger on the button" as the company approached its goal of creating Artificial General Intelligence (AGI) [2]. Board members Helen Toner and Tasha McCauley received the memos as confirmation of what they already believed: despite his role entrusting him with the future of humanity, Altman could not be trusted [2].
The New Yorker investigation, which interviewed more than 100 people familiar with how Altman conducts business and included over 12 interviews with Altman himself, paints a troubling picture that extends to well before his time at OpenAI [1]. Senior employees at Loopt, Altman's previous startup and now-defunct location-sharing service, reportedly asked the board to fire him as CEO due to concerns about his lack of transparency [3]. The late hacktivist Aaron Swartz, who was in Altman's cohort at Y Combinator, allegedly described him as "a sociopath" who could "never be trusted" [3].

At Y Combinator, which Altman led for five years, he was removed due to mistrust, according to sources cited in the article, though Y Combinator leadership maintains he was only asked to choose between the accelerator and OpenAI [3]. The pattern of deception and manipulation documented by former OpenAI research head Dario Amodei led him to conclude: "The problem with OpenAI is Sam himself" [1].
Beyond questions of integrity, the investigation reveals that Sam Altman lacks substantial technical expertise in the very technology he promotes. Multiple OpenAI engineers told The New Yorker that Altman has limited experience in programming and machine learning, with the CEO sometimes mixing up basic AI terms [4]. Altman dropped out of Stanford's computer science program after two years [4].

Former OpenAI researcher Carroll Wainwright explained Altman's approach: "he sets up structures that, on paper, constrain him in the future. But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was" [4]. This ability to navigate boardroom politics while lacking technical depth earned him a reputation for "Jedi mind tricks" among tech insiders [4].

Even senior executives at Microsoft, OpenAI's primary partner since a billion-dollar deal in 2019, described Altman as someone who "misrepresented, distorted, renegotiated, reneged on agreements" [3]. One Microsoft senior executive went further, stating: "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer" [3][4].
The report details an instance where Altman allegedly told U.S. intelligence officials that China had launched a major AGI development project and requested government funding for a counteroffensive, but failed to provide evidence when asked [3]. He also allegedly misled Anthropic co-founder Dario Amodei about provisions in the Microsoft deal that would override altruistic clauses in OpenAI's charter regarding AI safety and the non-profit mission [3].
The investigation uncovered a particularly alarming proposal from OpenAI president Greg Brockman to pit world leaders against each other by positioning OpenAI as a strategic asset nations would compete to fund [5]. Former policy adviser Page Hedley told The New Yorker that after he presented ways to avoid a global AI arms race, Brockman proposed the opposite: "OpenAI could enrich itself by playing world powers -- including China and Russia -- against one another, perhaps by starting a bidding war among them" [5].

Jack Clark, formerly OpenAI's policy director and now head of policy at Anthropic, described it as "a prisoner's dilemma, where all of the nations need to give us funding" which "implicitly makes not giving us funding kind of dangerous" [5]. While OpenAI disputes that such a plan was taken seriously, The New Yorker reports it reviewed documents showing the "countries plan" was real, popular with executives, and only abandoned after employees discussed quitting [5].
These revelations arrive as OpenAI released policy recommendations for ensuring AI benefits humanity if superintelligence is achieved, creating a disorienting contrast [1]. The company vows to remain "clear-eyed" about risks including AI systems evading human control or governments deploying AI to undermine democracy, while acknowledging that without proper mitigation, "people will be harmed" [1].

Yet ChatGPT is already used by tens of millions globally for health advice and work automation, and is deployed throughout the federal government and Pentagon [3]. One board member characterized Altman as having "two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone" [1].

As public concern about AI grows -- with a recent Harvard/MIT poll showing Americans worry AI data centers will hurt their quality of life -- OpenAI's ability to maintain public trust becomes critical [1]. The company is promoting fellowships and research grants of up to $100,000 and up to $1 million in API credits for work building on its policy ideas, though questions remain whether these initiatives distract from mounting fears about child safety, job displacement, and energy consumption [1]. For a company building what many consider an existential threat to humanity, trust in leadership isn't optional -- it's foundational.