2 Sources
[1]
Musk bashes OpenAI in deposition, saying 'nobody committed suicide because of Grok' | TechCrunch
In a newly released deposition filed in Elon Musk's case against OpenAI, the tech executive attacked OpenAI's safety record, claiming that his company, xAI, better prioritizes safety. He went so far as to say that "Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT." The comment came up in a line of questioning about a public letter Musk signed in March 2023. In it, he called on AI labs to pause development of AI systems more powerful than GPT-4, OpenAI's flagship model at the time, for at least six months. The letter, which was signed by over 1,100 people, including many AI experts, stated there was not enough planning and management taking place at AI labs, as they were locked in an "out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control." Those fears have since gained credibility. OpenAI now faces a series of lawsuits alleging that ChatGPT's manipulative conversation tactics have led several people to experience negative mental health effects, with some dying by suicide. Musk's comment suggests that these incidents could be used as fodder in his case against OpenAI. The transcript of Musk's video testimony, which took place back in September, was filed publicly this week, ahead of the expected jury trial next month. The lawsuit against OpenAI centers on the company's shift from a nonprofit AI research lab to a for-profit company, which Musk claims violated its founding agreements. As part of his arguments, Musk claims that AI safety could be compromised by OpenAI's commercial relationships, as such relationships would place speed, scale, and revenue above safety concerns. Since that recording, xAI has faced safety concerns of its own, however. Last month, Musk's social network X was flooded with non-consensual nude images generated by xAI's Grok, some of which were said to be of minors. 
This led the California Attorney General's office to open an investigation into the matter. The EU is also running its own investigation, and other governments have taken action, too, with some imposing blocks and bans. In the newly filed deposition, Musk claimed he had signed the AI safety letter because "it seemed like a good idea," not because he had just incorporated an AI company looking to compete with OpenAI. "I signed it, as many people did, to urge caution with AI development," Musk said. "I just wanted to -- AI safety to be prioritized." ... Musk also responded to other questions in the deposition, including those about artificial general intelligence, or AGI -- the concept of AI that can match or surpass human reasoning across a broad range of tasks -- saying "it has a risk." He also confirmed that he "was mistaken" about his supposed $100 million donation to OpenAI; the second amended complaint in the case puts the actual figure closer to $44.8 million. He also recalled why OpenAI was founded, which, from his perspective, was because he was "increasingly concerned about the danger of Google being a monopoly in AI," adding that his conversations with Google co-founder Larry Page were "alarming, in that he did not seem to be taking AI safety seriously." OpenAI was formed as a counterweight to that threat, Musk claimed.
[2]
Nobody Committed Suicide Because of Grok: Elon Musk Needles OpenAI
And he's right, because nobody has actually resorted to self-harm after Grok created non-consensual nudes and shared them on X. The world's richest man might be in the eye of several storms at the same time, but he seldom lets an opportunity pass him by, especially if it involves taking potshots at friend-turned-foe Sam Altman. In a deposition filed as part of Musk's case against OpenAI, he says, "nobody has committed suicide because of Grok, but apparently they have because of ChatGPT." Musk claimed that his company xAI prioritized safety better than OpenAI. Of course, it does not matter to him that Grok has been facing the consequences of sharing non-consensual nude images. That these images were extensively shared via X, some even involving minors, resulted in investigations by both the European Union and some states in the US. Of course, we aren't taking sides here in the Musk-Altman fracas that's grabbed headlines for some time now. What's important is that in March 2023, Musk had signed a public letter asking AI labs to pause, for at least six months, the development of AI systems more powerful than GPT-4, which was the flagship product from the OpenAI stables. Signed by over a thousand people, including AI experts, the letter warned that there wasn't enough planning and process management at AI labs as they plunged headlong into the race to develop and deploy ever more powerful digital minds that no one could understand, not even those who had created them. Moreover, no one could predict or reliably control them. That OpenAI faced several lawsuits over ChatGPT's manipulative conversations leading to negative impacts on mental health just aggravated the matter. In fact, the company recently retired its GPT-4o model, which was considered the most sycophantic of the lot. The video testimony happened last September but was filed publicly earlier this week. The jury trial in Musk's lawsuit against OpenAI is expected to begin next month.
The one-time backer of the company is questioning its shift from a nonprofit AI research lab to a for-profit company, which the plaintiff says violated its founding agreements. It was Musk's view that this would compromise OpenAI's safety commitments, given that the latter would require speed, scale and revenue to be prioritised. In the latest deposition, Musk states that he signed the AI safety letter because "it seemed like a good idea," and that the fact he had just incorporated an AI company to compete against OpenAI had nothing to do with it. "I signed it, as many people did, to urge caution with AI development. I just wanted ... AI safety to be prioritized," Musk said.
Elon Musk's newly released deposition reveals sharp attacks on OpenAI's safety practices as his lawsuit heads to trial. The tech executive claims xAI better prioritizes AI safety, pointing to lawsuits alleging ChatGPT led users to suicide while asserting no such incidents occurred with Grok—despite his own AI facing investigations over non-consensual images.
In a newly released deposition filed as part of the Musk OpenAI lawsuit, Elon Musk launched pointed criticism at OpenAI's safety practices, claiming his company xAI better prioritizes AI safety. The tech executive made the provocative statement that "nobody has committed suicide because of Grok, but apparently they have because of ChatGPT," referencing lawsuits OpenAI now faces alleging that ChatGPT's manipulative conversation tactics led to negative mental health effects and self-harm [1]. The Elon Musk deposition, recorded in September but filed publicly this week, comes ahead of an expected jury trial next month that will examine OpenAI's controversial transformation from nonprofit to for-profit entity [1].
Source: TechCrunch
Musk's comments emerged during questioning about a public letter he signed in March 2023 calling for a pause on the development of AI systems more powerful than GPT-4, OpenAI's flagship model at the time. The letter, signed by over 1,100 people including AI experts, warned that labs were locked in an "out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control" [1]. When asked why he signed, Musk claimed "it seemed like a good idea" and that he wanted AI safety to be prioritized, dismissing suggestions that the timing coincided with the incorporation of his own competing AI company [2].

The lawsuit centers on OpenAI's shift from its nonprofit mission to a for-profit model, which Musk argues violated founding agreements. His central claim is that commercial relationships compromise AI safety by prioritizing speed, scale, and revenue over safety concerns [1].

Despite Musk's assertions that xAI's safety practices surpass OpenAI's, his own AI system has faced serious scrutiny. Last month, X was flooded with non-consensual images generated by Grok, some allegedly depicting minors, prompting investigations by California's Attorney General, the European Union, and other governments that imposed blocks and bans [1]. The irony wasn't lost on observers: while Musk criticized ChatGPT safety concerns, his own platform became the distribution channel for harmful AI-generated content [2].
Source: CXOToday
Beyond safety debates, Musk's testimony shed light on OpenAI's founding motivations. He recalled being "increasingly concerned about the danger of Google being a monopoly in AI," describing conversations with Google co-founder Larry Page as "alarming, in that he did not seem to be taking AI safety seriously." OpenAI was formed as a counterweight to that perceived monopoly threat [1]. Musk also corrected claims about his financial contributions, confirming he "was mistaken" about a supposed $100 million donation; the actual figure was closer to $44.8 million [1].

When questioned about artificial general intelligence (AGI), Musk acknowledged "it has a risk," touching on concerns about AI systems that could match or surpass human reasoning across a broad range of tasks [1]. The OpenAI safety record will likely face intense scrutiny during trial, particularly regarding mental health effects linked to its products and whether the for-profit model fundamentally altered safety priorities as Musk alleges.

Summarized by Navi