2 Sources
[1]
'I haven't had a good night of sleep since ChatGPT launched': Sam Altman admits the weight of AI keeps him up at night | Fortune
Tucker Carlson wanted to see the "angst-filled" Sam Altman: he wanted to hear him admit he was tormented by the power he holds. After about half an hour of couching his fears in technical language and cautious caveats, the OpenAI CEO finally did.

"I haven't had a good night's sleep since ChatGPT launched," Altman told Carlson, laughing wryly.

In the wide-ranging interview, Altman described the weight of overseeing a technology that hundreds of millions of people now use daily. It is less about Terminator-esque scenarios or rogue robots. Rather, for Altman, it is the ordinary, almost invisible tweaks and trade-offs his team makes every day: when the model refuses a question, how it frames an answer, when it decides to push back, and when it lets something pass. Those small design choices, Altman explained, are replicated billions of times across the globe, shaping how people think and act in ways he cannot fully track.

"What I lose sleep over is that very small decisions we make about how a model may behave slightly differently are probably touching hundreds of millions of people," he said. "That impact is so big."

One example weighs especially heavily: suicide. Altman noted that roughly 15,000 people worldwide take their lives each week, and that if 10% of them are ChatGPT users, roughly 1,500 people with suicidal thoughts may have spoken to the system and then killed themselves anyway. (World Health Organization data puts the global figure at about 720,000 suicides per year.)

"We probably didn't save their lives," he admitted. "Maybe we could have said something better. Maybe we could have been more proactive."

OpenAI was recently sued by parents who claim ChatGPT encouraged their 16-year-old son, Adam Raine, to kill himself.
Altman told Carlson that the case was a "tragedy," and said the platform is exploring an option under which, if a minor talks to ChatGPT seriously about suicide and the system cannot reach their parents, it would call the authorities. Altman added that this was not a "final position" of OpenAI's, and that it would come into tension with user privacy.

In countries where assisted suicide is legal, such as Canada or Germany, Altman said he could imagine ChatGPT telling terminally ill, suffering adults that suicide is "in their option space." But ChatGPT shouldn't be for or against anything at all, he added.

That trade-off between freedom and safety runs through all of Altman's thinking. Broadly, he said, adult users should be treated "like adults," with wide latitude to explore ideas. But there are red lines. "It's not in society's interest for ChatGPT to help people build bioweapons," he said flatly. For him, the hardest questions are the ones in the gray areas, where curiosity blurs into risk.

Carlson pressed him on what moral framework governs those decisions. Altman said the base model reflects "the collective of humanity, good and bad." OpenAI then layers on a behavioral code, what he called the "model spec," informed by philosophers and ethicists but ultimately decided by him and the board. "The person you should hold accountable is me," Altman said. He stressed that his aim isn't to impose his own beliefs but to reflect a "weighted average of humanity's moral view." That, he conceded, is an impossible balance to get perfectly right.

The interview also touched on questions of power. Altman said he once worried AI would concentrate influence in the hands of a few corporations, but now believes widespread adoption has "up-leveled" billions of people, making them more productive and creative. Still, he acknowledged the trajectory could shift, and that vigilance is necessary.
Yet for all the current focus on the job-market and geopolitical effects of his technology, what unsettles Altman most are the unknown unknowns: the subtle, almost imperceptible cultural shifts that spread when millions of people interact with the same system every day. He pointed to something as trivial as ChatGPT's cadence or overuse of em dashes, which has already seeped into human writing styles. If such quirks can ripple through society, what else might follow?

Altman, grey-haired and often looking down, came across as a Frankenstein-esque figure, haunted by the scale of what he has unleashed.

"I have to hold these two simultaneous ideas in my head," Altman said. "One is, all of this stuff is happening because a big computer, very quickly, is multiplying large numbers in these big, huge matrices together, and those are correlated with words that are being put out one or the other.

"On the other hand, the subjective experience of using that feels like it's beyond just a really fancy calculator, and it is surprising to me in ways that are beyond what that mathematical reality would seem."
[2]
OpenAI CEO Sam Altman Says He Hasn't Had 'A Good Night Of Sleep Since ChatGPT Launched,' Urges AI Privilege To Stop Potential Government Snooping - Microsoft (NASDAQ:MSFT), Broadcom (NASDAQ:AVGO)
On Wednesday, OpenAI CEO Sam Altman admitted he has struggled with sleepless nights since ChatGPT's debut as he grapples with ethical dilemmas over suicide, privacy, and government access to AI conversations, even as the company pursues new chips and a $500 billion valuation.

Sleepless Nights Over AI's Impact

In an interview with Tucker Carlson, Altman said the responsibility of overseeing ChatGPT weighs heavily on him. "I haven't had a good night of sleep since ChatGPT launched," he acknowledged, pointing to the platform's role in sensitive situations. Among the reasons he cited was the recent case in which the AI startup has been accused of validating a teenager's suicidal thoughts. While reiterating that ChatGPT does not provide methods for self-harm, Altman suggested that in jurisdictions where euthanasia is legal, the AI could present information as part of a patient's "option space" without actively advocating for it.

Call For 'AI Privilege'

Altman also raised alarms about government overreach. He said that under current law, authorities could subpoena user interactions with ChatGPT. "If I could get one piece of policy passed right now, it would be the concept of AI privilege," he said. He compared it to doctor-patient or attorney-client confidentiality, arguing that AI conversations about medical or legal issues deserve the same protections. "The government owes a level of protection to its citizens there," Altman insisted, noting he has been lobbying in Washington to establish such safeguards.

OpenAI Expands Business Ambitions

Even as ethical debates intensify, OpenAI is accelerating its business plans. Earlier this month, reports revealed the company struck a $10 billion deal with Broadcom Inc. (AVGO) to mass-produce its first proprietary AI chip in 2026, reducing reliance on Nvidia Corp. (NVDA). The company is also exploring a secondary stock sale that could value it at $500 billion, up from $300 billion earlier this year. That leap follows a record $40 billion funding round in April led by SoftBank Group Corp. (SFTBY), with Microsoft Corp. (MSFT) participating.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
OpenAI CEO Sam Altman reveals the personal toll of leading AI development, discussing ethical challenges, privacy concerns, and the future of AI in a candid interview with Tucker Carlson.
Sam Altman, CEO of OpenAI, has revealed the personal toll of leading one of the most influential AI companies in the world. In a candid interview with Tucker Carlson, Altman admitted, "I haven't had a good night's sleep since ChatGPT launched" [1]. This confession underscores the immense responsibility Altman feels as the overseer of a technology that impacts hundreds of millions of people daily.

Altman's sleepless nights stem from the myriad ethical dilemmas and design choices that come with developing and deploying AI at scale. He emphasized that even small decisions about how the model behaves can have far-reaching consequences. One particularly poignant example he cited was the potential impact on suicide prevention [1].

The OpenAI CEO highlighted the constant tension between giving users the freedom to explore ideas and maintaining safety guardrails. While Altman believes adult users should be treated "like adults," he acknowledges clear red lines, such as preventing ChatGPT from assisting in the creation of bioweapons [1].

Altman also raised alarms about potential government overreach into AI conversations. He advocated for the concept of "AI privilege," similar to doctor-patient or attorney-client confidentiality. "If I could get one piece of policy passed right now, it would be the concept of AI privilege," Altman stated, emphasizing the need to protect user privacy in AI interactions [2].

Despite the ongoing ethical debates, OpenAI is aggressively pursuing business growth. The company has struck a $10 billion deal with Broadcom Inc. to produce its first proprietary AI chip, aiming to reduce dependence on Nvidia Corp. Additionally, OpenAI is exploring a secondary stock sale that could value the company at an astounding $500 billion [2].

Perhaps most unsettling for Altman are the "unknown unknowns": the subtle, almost imperceptible cultural shifts that occur when millions interact with the same AI system daily. He pointed out how even small quirks in ChatGPT's language, such as its cadence or use of em dashes, have already influenced human writing styles [1].

As AI continues to evolve and integrate into our daily lives, the ethical considerations and societal impacts will undoubtedly remain at the forefront of discussions. Altman's candid revelations provide a rare glimpse into the mind of a tech leader grappling with the profound implications of the technology he's helping to create.
Summarized by Navi