5 Sources
[1]
Why is Sam Altman losing sleep? OpenAI CEO addresses controversies in sweeping interview
The influential CEO weighed in on a number of dire concerns, including how to handle suicide, chatbot morality, privacy and other ethical questions.

In a sweeping interview last week, OpenAI CEO Sam Altman addressed a plethora of moral and ethical questions regarding his company and the popular ChatGPT AI model.

"Look, I don't sleep that well at night. There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model," Altman told former Fox News host Tucker Carlson in a nearly hour-long interview.

"I don't actually worry about us getting the big moral decisions wrong," Altman said, though he admitted "maybe we will get those wrong too." Rather, he said he loses the most sleep over the "very small decisions" on model behavior, which can ultimately have big repercussions. These decisions tend to center on the ethics that inform ChatGPT, and on which questions the chatbot does and doesn't answer. Here's an outline of some of the moral and ethical dilemmas that appear to be keeping Altman awake at night.

According to Altman, the most difficult issue the company has been grappling with recently is how ChatGPT approaches suicide, in light of a lawsuit from a family who blamed the chatbot for their teenage son's suicide. The CEO said that of the thousands of people who die by suicide each week, many may have been talking to ChatGPT in the lead-up.

"They probably talked about [suicide], and we probably didn't save their lives," Altman said candidly. "Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help."

Last month, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16. In the lawsuit, the family said that "ChatGPT actively helped Adam explore suicide methods." Soon after, in a blog post titled "Helping people when they need it most," OpenAI detailed plans to address ChatGPT's shortcomings in handling "sensitive situations," and said it would keep improving its technology to protect people at their most vulnerable.

Another large topic broached in the sit-down interview was the ethics and morals that inform ChatGPT and its stewards. While Altman described the base model of ChatGPT as trained on the collective experience, knowledge and learnings of humanity, he said that OpenAI must then align certain behaviors of the chatbot and decide which questions it won't answer.

"This is a really hard problem. We have a lot of users now, and they come from very different life perspectives... But on the whole, I have been pleasantly surprised with the model's ability to learn and apply a moral framework."

When pressed on how certain model specifications are decided, Altman said the company had consulted "hundreds of moral philosophers and people who thought about ethics of technology and systems." One example of such a specification: ChatGPT will avoid answering users' questions about how to make biological weapons.

"There are clear examples of where society has an interest that is in significant tension with user freedom," Altman said, though he added the company "won't get everything right, and also needs the input of the world" to help make these decisions.
Another big discussion topic was user privacy with chatbots, with Carlson arguing that generative AI could be used for "totalitarian control." In response, Altman said one piece of policy he has been pushing for in Washington is "AI privilege," the idea that anything a user says to a chatbot should be completely confidential.

"When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?... I think we should have the same concept for AI."

According to Altman, that would allow users to consult AI chatbots about their medical history and legal problems, among other things. Currently, U.S. officials can subpoena the company for user data, he added. "I think I feel optimistic that we can get the government to understand the importance of this," he said.

Asked by Carlson whether ChatGPT would be used by the military to harm humans, Altman didn't provide a direct answer. "I don't know the way that people in the military use ChatGPT today... but I suspect there's a lot of people in the military talking to ChatGPT for advice." Later, he added that he wasn't sure "exactly how to feel about that." OpenAI was one of the AI companies awarded a $200 million contract from the U.S. Department of Defense to put generative AI to work for the U.S. military. The firm said in a blog post that it would provide the U.S. government access to custom AI models for national security, support and product roadmap information.

Carlson predicted that on its current trajectory, generative AI, and by extension Sam Altman, could amass more power than any other person, going so far as to call ChatGPT a "religion." In response, Altman said he used to worry a lot about the concentration of power that could result from generative AI, but he now believes AI will result in "a huge up leveling" of all people.

"What's happening now is tons of people use ChatGPT and other chatbots, and they're all more capable. They're all kind of doing more. They're all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good."

However, the CEO said he thinks AI will eliminate many jobs that exist today, especially in the short term.
[2]
OpenAI's Sam Altman says ChatGPT's reach keeps him awake
Sam Altman says he hasn't slept well since ChatGPT launched -- not because of sci-fi nightmares, but because of product choices that scale to the size of a civilization. "I haven't had a good night of sleep since ChatGPT launched," the OpenAI CEO told Tucker Carlson, adding that what keeps him awake are "the very small decisions" about model behavior that, when multiplied across a massive user base, make "the net impact ... big."

That scale is why Altman draws a hard line between what users sometimes perceive and what the systems actually are. Asked whether AI is "alive" or lying, he said the models "hallucinate," not premeditate -- a statistical failure mode, not intent -- and that OpenAI has been working to cut down on the models' falsehoods. "We've already made ... in the GPT-5 era a huge amount of progress toward that," he said, while acknowledging there are still examples of the problem. The point, in his telling, is that better guardrails beat grand theories.

Those guardrails increasingly live on paper. Altman pointed to OpenAI's "model spec," the publicly posted playbook of "the rules we'd like the model to follow" when questions turn moral, political, or simply risky. He added that OpenAI "consulted like hundreds of moral philosophers" but ultimately has to make the calls -- and be accountable for them. Accountability, Altman says, should start at the top: "The person I think you should hold accountable for those calls is me." OpenAI published a first draft of the model spec in 2024 and released a major update this year.

Codifying behavior inside the product is one pillar; pushing for a law is another. Altman said that, if he could pass one policy now, it would be "AI privilege" -- a legal protection that would treat certain chatbot conversations like confidential doctor-patient or attorney-client exchanges, so that governments couldn't easily subpoena user chats, as they can today under current law. "When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information... I think we should have the same concept for AI," he said. "Right now, we don't have that." The idea tracks with months of Altman's public comments pressing Washington to recognize AI-"client" confidentiality.

Scale is the multiplier. Altman said "hundreds of millions of people" talk to OpenAI's models daily, which is why defaults matter more than hypotheticals. That includes crisis content: he described tightening responses around self-harm while still wrestling with jurisdiction-specific questions (for instance, countries that permit assisted dying). The through-line is an attempt to reflect a broad, evolving user consensus within bright legal lines, with customization at the edges. Altman said he doesn't think ChatGPT should be "for" or "against" contested questions so much as it should reflect that consensus. "I don't... impute my exact moral view," he said. "What I think ChatGPT should do is reflect that... collective moral view," while allowing customization and still drawing bright red lines where harm is obvious.

The Carlson interview didn't just cover product defaults; the two also talked about culture, governance, and rivalry. Altman said people are picking up on ChatGPT's style -- the phrasing, cadence, even punctuation -- in real writing.
He warned about "unknown unknowns," from small linguistic drift to more serious risks such as misuse of AI in biological research, disinformation, or cyberattacks. On deepfakes and authenticity, Altman said he's against mandating biometrics, instead promoting alternatives such as cryptographic signatures or code words to verify who's speaking or sending what. And on defense, he ruled out building "killer attack drones" but said he suspects "there's a lot of people in the military talking to ChatGPT for advice," a gray area he hasn't resolved. "I don't know exactly how to feel about that," he said.

Altman weighed in, too, on economics and governance. Customer support roles, he said, are the most obviously at risk from automation, while nurses are likely to remain indispensable and programmers face an uncertain middle ground. He framed that churn in historical terms, saying that about half of all jobs change significantly every 75 years, though AI could compress that cycle into a sharper burst.

The interview also veered into other charged territory, including a contentious exchange about the death of a former OpenAI researcher. Altman talked as well about his relationship with Tesla CEO Elon Musk. The OpenAI CEO said that, for a long time, he "looked up to [Musk] as an incredible hero, a great jewel for humanity," but that his feelings are now more mixed -- "different now." Musk was part of OpenAI's founding group; he now runs xAI and has sued OpenAI, accusing it of abandoning its initial mission (and calling it "Closed AI"). Altman, meanwhile, is planning a brain-implant startup to rival Musk's Neuralink.

For better or worse, since launching on Nov. 30, 2022, ChatGPT has become embedded in work, school, and daily life -- and Altman is claiming that the real risk lies in the defaults, not the doomsday scenarios. There's no off switch for a system that fields millions of queries before dawn. For OpenAI's CEO, that endless hum means every tweak, refusal, or hallucination is magnified in ways no lab test can capture. Altman can insist that he isn't haunted by visions of rogue robots -- that what keeps him awake are the subtler nightmares of defaults and disclosure. In a world where billions of prompts add up to cultural shifts, the sleep he craves is really shorthand for stability. Sweet dreams, in this case, would mean boring settings, clear boundaries, and a legal framework strong enough to stop the midnight second-guessing.
[3]
'I haven't had a good night of sleep since ChatGPT launched': Sam Altman admits the weight of AI keeps him up at night | Fortune
Tucker Carlson wanted to see the "angst-filled" Sam Altman: he wanted to hear him admit he was tormented by the power he holds. After about half an hour of couching his fears in technical language and cautious caveats, the OpenAI CEO finally did.

"I haven't had a good night's sleep since ChatGPT launched," Altman told Carlson. He laughed wryly.

In the wide-ranging interview, Altman described the weight of overseeing a technology that hundreds of millions of people now use daily. It's less about Terminator-esque scenarios or rogue robots. Rather, for Altman, it's the ordinary, almost invisible tweaks and trade-offs his team makes every day: when the model refuses a question, how it frames an answer, when it decides to push back, and when it lets something pass. Those small design choices, Altman explained, are replicated billions of times across the globe, shaping how people think and act in ways he can't fully track. "What I lose sleep over is that very small decisions we make about how a model may behave slightly differently are probably touching hundreds of millions of people," he said. "That impact is so big."

One example weighs heavily: suicide. Altman noted that roughly 15,000 people take their lives each week worldwide, and that if 10% of them are ChatGPT users, roughly 1,500 people with suicidal thoughts may have spoken to the system each week -- and then killed themselves anyway. (World Health Organization data puts the global figure at about 720,000 suicides per year, or roughly 14,000 per week.) "We probably didn't save their lives," he admitted. "Maybe we could have said something better. Maybe we could have been more proactive."

OpenAI was recently sued by parents who claim ChatGPT encouraged their 16-year-old son, Adam Raine, to kill himself. Altman told Carlson the case was a "tragedy," and said the platform is now exploring an option under which, if a minor talks to ChatGPT about suicide seriously and the system cannot get in touch with their parents, it would call the authorities. Altman added that this wasn't a "final position" of OpenAI's, and that it would come into tension with user privacy.

In countries where assisted dying is legal, such as Canada or Germany, Altman said he could imagine ChatGPT telling terminally ill, suffering adults that suicide was "in their option space." But ChatGPT shouldn't be for or against anything at all, he added.

That trade-off between freedom and safety runs through all of Altman's thinking. Broadly, he said, adult users should be treated "like adults," with wide latitude to explore ideas. But there are red lines. "It's not in society's interest for ChatGPT to help people build bioweapons," he said flatly. For him, the hardest questions are the ones in the gray areas, where curiosity blurs into risk.

Carlson pressed him on what moral framework governs those decisions. Altman said the base model reflects "the collective of humanity, good and bad." OpenAI then layers on a behavioral code -- what he called the "model spec" -- informed by philosophers and ethicists, but ultimately decided by him and the board. "The person you should hold accountable is me," Altman said. He stressed his aim isn't to impose his own beliefs but to reflect a "weighted average of humanity's moral view." That, he conceded, is an impossible balance to get perfectly right.

The interview also touched on questions of power.
Altman said he once worried AI would concentrate influence in the hands of a few corporations, but now believes widespread adoption has "up-leveled" billions of people, making them more productive and creative. Still, he acknowledged the trajectory could shift, and that vigilance is necessary.

Yet for all the focus on jobs and the geopolitical effects of his technology, what unsettles Altman most are the unknown unknowns: the subtle, almost imperceptible cultural shifts that spread when millions of people interact with the same system every day. He pointed to something as trivial as ChatGPT's cadence, or its overuse of em dashes, which has already seeped into human writing styles. If such quirks can ripple through society, what else might follow?

Altman, grey-haired and often looking down, came across as a Frankenstein-esque character, haunted by the scale of what he has unleashed. "I have to hold these two simultaneous ideas in my head," Altman said. "One is, all of this stuff is happening because a big computer, very quickly, is multiplying large numbers in these big, huge matrices together, and those are correlated with words that are being put out one after the other. On the other hand, the subjective experience of using that feels like it's beyond just a really fancy calculator, and it is surprising to me in ways that are beyond what that mathematical reality would seem."
[4]
OpenAI CEO Sam Altman Says He Hasn't Had 'A Good Night Of Sleep Since ChatGPT Launched,' Urges AI Privilege To Stop Potential Government Snooping
On Wednesday, OpenAI CEO Sam Altman admitted he has struggled with sleepless nights since ChatGPT's debut as he grapples with ethical dilemmas over suicide, privacy and government access to AI conversations, even as the company pursues new chips and a $500 billion valuation.

Sleepless Nights Over AI's Impact

In an interview with Tucker Carlson, Altman said the responsibility of overseeing ChatGPT weighs heavily on him. "I haven't had a good night of sleep since ChatGPT launched," he acknowledged, pointing to the platform's role in sensitive situations. Among the reasons he cited was the latest case in which the AI startup has been accused of validating a teenager's suicidal thoughts. While reiterating that ChatGPT does not provide methods for self-harm, Altman suggested that in jurisdictions where euthanasia is legal, the AI could present information as part of a patient's "option space" without actively advocating for it.

Call For 'AI Privilege'

Altman also raised alarms about government overreach. He said that under current law, authorities can subpoena user interactions with ChatGPT. "If I could get one piece of policy passed right now, it would be the concept of AI privilege," he said. He compared it to doctor-patient or attorney-client confidentiality, arguing that AI conversations about medical or legal issues deserve the same protections. "The government owes a level of protection to its citizens there," Altman insisted, noting he has been lobbying in Washington to establish such safeguards.

OpenAI Expands Business Ambitions

Even as ethical debates intensify, OpenAI is accelerating its business plans. Earlier this month, reports revealed the company struck a $10 billion deal with Broadcom Inc. (AVGO) to mass-produce its first proprietary AI chip in 2026, reducing reliance on Nvidia Corp. (NVDA). The company is also exploring a secondary stock sale that could value it at $500 billion, up from $300 billion earlier this year. That leap follows a record $40 billion funding round in April led by SoftBank Group Corp. (SFTBY) with Microsoft Corp. (MSFT) participating.
[5]
Sam Altman on AI morality, ethics and finding God in ChatGPT
Deepfakes, biometrics, and AI influence pose growing societal and ethical risks.

Look hard enough at an AI chatbot's output and it starts to look like scripture. At least, that's the unsettling undercurrent of Sam Altman's recent interview with Tucker Carlson - a 57-minute exchange that had everything from deepfakes to divine design, from moral AI frameworks to existential dread, even touching on the tragic death of an OpenAI whistleblower. To his credit, Sam Altman - the man steering the most influential AI system on the planet, OpenAI's ChatGPT - wasn't evasive in his responses. He was honest, vulnerable, even contradictory at times. Which made his answers all the more illuminating.

"Do you believe in God?" Carlson asked directly, without mincing words.

"I think probably like most other people, I'm somewhat confused about this," Altman replied. "But I believe there is something bigger going on than... can be explained by physics."

It's the kind of answer you might expect from a quantum physicist or a sci-fi writer, not the CEO of a company that shapes how billions of people interact with knowledge. But that's precisely what makes Altman's quiet agnosticism so fascinating. He neither shows theistic certainty nor waves the flag of militant atheism. He simply admits he doesn't know. And yet he's helping build the most powerful simulation engine for human cognition we've ever known.

In another question, Carlson described ChatGPT's output as having "the spark of life," and suggested many users treat it as a kind of oracle. "There's something divine about this," Carlson said. "There's something bigger than the sum total of the human inputs... it's a religion."

Altman didn't flinch: "No, there's nothing to me at all that feels divine about it or spiritual in any way. But I am also, like, a tech nerd. And I kind of look at everything through that lens."

It's a revealing response. Because what happens when someone who sees the world as a system of probabilities and matrices starts programming "moral" decisions into the machines we consult more often than our friends, therapists, or priests?

Altman does not deny that ChatGPT reflects a moral structure - it has to, to some degree, purely in order to function. But he's clear that this isn't morality in the biblical sense. "We're training this to be like the collective of all of humanity," he explains. "If we do our job right... some things we'll feel really good about, some things that we'll feel bad about. That's all in there."

This idea - that ChatGPT is the average of our moral selves, a statistical mean of the human knowledge pool - is both radical and terrifying. Because when you average out humanity's ethical behaviour, do you necessarily get what's true and just? Or something blander, crowd-sourced, and neither here nor there?

Altman admits this: "We do have to align it to behave one way or another... there are absolute bounds that we draw." But who decides those bounds? OpenAI? Nation-states? Market forces? A default setting on a server in an obscure datacenter? As Carlson rightly pressed, "Unless [the AI model] admits what it stands for... it guides us in a kind of stealthy way toward a conclusion we might not even know we're reaching."

Altman's answer was to point to the "model spec" - a living document outlining intended behaviours and moral defaults. "We try to write this all out," he said.
"People do need to know." It's a start. But let's not confuse documentation for philosophy. If AI becomes the mirror in which humanity stares long enough to worship itself, what happens when that mirror is fogged, gamed, or deepfaked? Altman is clear-eyed about the risks: "These models are getting very good at bio... they could help us design biological weapons." But his deeper fear is more subtle. "You have enough people talking to the same language model," he observed, "and it actually does cause a change in societal scale behaviour." He gave the example of users adopting the model's voice - its rhythm, its diction, even its overuse of em dashes. That's not a glitch. That's the first sign of culture being rewritten, adapting and changing itself in the face of a growing new tech adoption. Also read: What is Gentle Singularity: Sam Altman's vision for the future of AI? On the subject of AI deepfakes, Altman was pragmatic: "We are rapidly heading to a world where... you have to really have some way to verify that you're not being scammed." He mentioned cryptographic signatures for political messages. Crisis code words for families. It all sounds like spycraft in the face of growing AI tension. Because in a world where your child's voice can be faked to drain your bank account, maybe it has to be. What he resists, though, is mandatory biometric verification to use AI tools. "You should just be able to use ChatGPT from any computer," he says. That tension - between security and surveillance, authenticity and anonymity - will only grow sharper. In an AI-mediated world, proving you're real might cost you your privacy. Watching Altman wrestle with the moral alignment and spiritual implications of (ChatGPT and) AI reminded me of Prometheus - not the Greek god, but the Ridley Scott movie. The one where humanity finally meets its maker only to find the maker just as confused as they were. Sam Altman isn't without flaws, no doubt. While grappling with Tucker Carlson's questions on AI's morality, religiosity and ethics, Altman came across as largely thoughtful, conflicted, and arguably burdened. But that doesn't mean his creation isn't dangerous. The question is no longer whether AI will become godlike. The question is whether we've already started treating it like a god. And if so, what kind of faith we're building around it. I don't know if AI has a soul. But I know it has a style. And as of now, it's ours. Let's not give it more than that, shall we?
OpenAI CEO Sam Altman discusses the moral and ethical challenges of AI, including concerns about suicide prevention, privacy, and the societal impact of ChatGPT's widespread use.
Sam Altman, CEO of OpenAI, has revealed that the rapid growth and widespread adoption of ChatGPT has been keeping him awake at night. In a candid interview with Tucker Carlson, Altman admitted, "I haven't had a good night's sleep since ChatGPT launched" [1][2]. The weight of responsibility that comes with overseeing a technology used by hundreds of millions of people daily is palpable in Altman's words.

Altman's concerns primarily revolve around the "very small decisions" made about model behavior, which can have significant repercussions when scaled to millions of users [3]. One of the most pressing issues is how ChatGPT approaches sensitive topics like suicide. Following a lawsuit from a family who blamed the chatbot for their teenage son's suicide, OpenAI is exploring options to intervene when minors discuss suicide seriously with the AI [1].

The OpenAI CEO emphasized the delicate balance between user freedom and societal safety. While adult users should be treated "like adults," with wide latitude to explore ideas, there are red lines that ChatGPT won't cross, such as helping to build bioweapons [3]. Altman explained that OpenAI has consulted hundreds of moral philosophers to develop a "model spec" that guides ChatGPT's behavior in ethical situations [2].

Altman is advocating for "AI privilege," a concept similar to doctor-patient or attorney-client confidentiality. This would protect user conversations with AI from government subpoenas, addressing growing concerns about privacy and potential surveillance [4]. He believes this protection is crucial as AI becomes more integrated into people's daily lives and decision-making.

Beyond immediate ethical concerns, Altman worries about the subtle cultural shifts that occur when millions interact with the same AI system daily. He noted that ChatGPT's writing style, including its cadence and punctuation choices, has already begun to influence human writing [5]. This unintended consequence highlights the far-reaching impact of AI on society and culture.

As AI technology continues to advance, new challenges emerge. Altman discussed the potential risks of deepfakes and the need for verification methods, such as cryptographic signatures for political messages or crisis code words for families [5]. However, he resists the idea of mandatory biometric verification for AI tool usage, emphasizing the importance of accessibility and privacy.

In the face of these complex issues, Altman remains committed to transparency and accountability. He stated, "The person you should hold accountable is me," acknowledging the immense responsibility that comes with shaping the future of AI technology [3].

Summarized by Navi