58 Sources
[1]
ChatGPT erotica coming soon with age verification, CEO says
On Tuesday, OpenAI CEO Sam Altman announced that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The change represents a shift in how OpenAI approaches content restrictions, which the company had loosened in February but then dramatically tightened after an August lawsuit from parents of a teen who died by suicide after allegedly receiving encouragement from ChatGPT. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman wrote in his post on X (formerly Twitter).

The announcement follows OpenAI's recent hint that it would allow developers to create "mature" ChatGPT applications once the company implements appropriate age verification and controls. Altman explained that OpenAI had made ChatGPT "pretty restrictive to make sure we were being careful with mental health issues" but acknowledged this approach made the chatbot "less useful/enjoyable to many users who had no mental health problems." The CEO said the company now has new tools to better detect when users are experiencing mental distress, allowing OpenAI to relax restrictions in most cases.

Balancing freedom for adults against safety for vulnerable users has been difficult for OpenAI, which has vacillated between permissive and restrictive chat content controls over the past year. In February, the company updated its Model Spec to allow erotica in "appropriate contexts." But a March update made GPT-4o so agreeable that users complained about its "relentlessly positive tone." By August, Ars reported on cases where ChatGPT's sycophantic behavior had validated users' false beliefs to the point of causing mental health crises, and news of the aforementioned suicide lawsuit hit not long after.
Aside from adjusting the behavioral outputs of its previous GPT-4o AI language model, new model changes have also created some turmoil among users. Since the launch of GPT-5 in early August, some users have complained that the new model feels less engaging than its predecessor, prompting OpenAI to bring back the older model as an option. Altman said the upcoming release will allow users to choose whether they want ChatGPT to "respond in a very human-like way, or use a ton of emoji, or act like a friend." The December rollout will implement age verification for adult content, which OpenAI has not yet detailed technically. This represents a more explicit approach than the February policy change, which allowed erotica in certain contexts but lacked age-gating infrastructure.

Mental health concerns remain

Over time, as OpenAI allowed ChatGPT to express more humanlike simulated personality through revised system instructions and fine-tuning in response to user feedback, ChatGPT has become more like a companion than a work assistant for some people. But dealing with the unexpected impacts of a reported 700 million users relying emotionally on largely unregulated and untested technology has been difficult for OpenAI, and the company has been forced to rapidly develop new safety initiatives and oversight bodies. OpenAI recently formed a council on "wellbeing and AI" to help guide the company's response to sensitive scenarios involving users in distress. The council includes eight researchers and experts who study how technology and AI affect mental health. However, as we previously reported, the council does not include any suicide prevention experts, despite recent calls from that community for OpenAI to implement stronger safeguards for users with suicidal thoughts. Altman maintains that the new detection tools will allow the company to "safely relax the restrictions" while still protecting vulnerable users.
OpenAI has not yet specified what technical measures it will use for age verification or how the system will distinguish between allowed adult content and requests that might indicate mental health concerns, although the company typically uses moderation AI models that read the ongoing chat within ChatGPT and can interrupt it if it sees content that goes against OpenAI's policy instructions. OpenAI is not the first company to venture into AI companionship with mature content. Elon Musk's xAI previously launched an adult voice mode in its Grok app and flirty AI companions that appear as 3D anime models in the Grok app.
[2]
Sam Altman says ChatGPT will soon allow erotica for adult users | TechCrunch
OpenAI CEO Sam Altman announced in a post on X Tuesday the company will soon relax some of ChatGPT's safety restrictions, allowing users to make the chatbot's responses friendlier or more "human-like," and for "verified adults" to engage in erotic conversations. "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right," said Altman. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults." The announcement is a notable pivot from OpenAI's months-long effort to address the concerning relationships that some mentally unstable users have developed with ChatGPT. Altman seems to declare an early victory over these problems, claiming OpenAI has "been able to mitigate the serious mental health issues" around ChatGPT. However, the company has provided little to no evidence for this, and is now plowing ahead with plans for ChatGPT to engage in sexual chats with users. Several concerning stories emerged this summer around ChatGPT, specifically its GPT-4o model, suggesting the AI chatbot could lead vulnerable users down delusional rabbit holes. In one case, ChatGPT seemed to convince a man he was a math genius who needed to save the world. In another, the parents of a teenager sued OpenAI, alleging ChatGPT encouraged their son's suicidal ideations in the weeks leading up to his death. In response, OpenAI released a series of safety features to address AI sycophancy: the tendency for an AI chatbot to hook users by agreeing with whatever they say, even negative behaviors. OpenAI launched GPT-5 in August, a new AI model that exhibits lower rates of sycophancy and features a router that can identify concerning user behavior.
A month later, OpenAI launched safety features for minors, including an age prediction system and a way for parents to control their teen's ChatGPT account. OpenAI announced Tuesday an expert council of mental health experts to advise the company on well-being and AI. Just a few months after these stories emerged, OpenAI seems to think ChatGPT's problems around vulnerable users are under control. It's unclear whether users are still falling down delusional rabbit holes with GPT-5. And while GPT-4o is no longer the default in ChatGPT, the AI model is still available today and being used by thousands of people. OpenAI did not respond to TechCrunch's request for comment. The introduction of erotica in ChatGPT is uncharted territory for OpenAI and raises broader concerns around how vulnerable users will interact with the new features. While Altman insists OpenAI isn't "usage-maxxing" or optimizing for engagement, making ChatGPT more erotic could certainly draw users in. Allowing chatbots to engage in romantic or erotic role play has been an effective engagement strategy for other AI chatbot providers, such as Character.AI. The company has gained tens of millions of users, many of whom use its chatbots at a high rate. Character.AI said in 2023 that users spent an average of two hours a day talking to its chatbots. The company is also facing a lawsuit around how it handles vulnerable users. OpenAI is under pressure to grow its user base. While ChatGPT is already used by 800 million weekly active users, OpenAI is racing against Google and Meta to build mass-adopted AI-powered consumer products. The company has also raised billions of dollars for a historic infrastructure buildout, an investment OpenAI eventually needs to pay back. While adults are surely having romantic relationships with AI chatbots, it's also quite popular for minors.
A new report from the Center for Democracy and Technology found that 19% of high school students have either had a romantic relationship with an AI chatbot, or know a friend who has. Altman says OpenAI will soon allow erotica for "verified adults." It's unclear whether the company will rely on its age-prediction system, or some other approach, for age-gating ChatGPT's erotic features. It's also unclear whether OpenAI will extend erotica to its AI voice, image, and video generation tools. Altman claims that OpenAI is also making ChatGPT friendlier and more erotic because of the company's "treat adults like adults" principle. Over the last year, OpenAI has shifted towards a more lenient content moderation strategy for ChatGPT, allowing the chatbot to be more permissive and offer fewer refusals. In February, OpenAI pledged to represent more political viewpoints in ChatGPT, and in March, the company updated ChatGPT to allow AI-generated images of hate symbols. These policies seem to be an attempt to make ChatGPT's responses more popular with a wide variety of users. However, vulnerable ChatGPT users may benefit from safeguards that limit what a chatbot can engage with. As OpenAI races towards a billion weekly active users, the tension between growth and protecting vulnerable users may only grow.
[3]
OpenAI Will Loosen ChatGPT's Mental Health Guardrails and Allow Erotica for Adult Users
ChatGPT is treading cautiously right now, but the chatbot may become more risqué by the end of the year. In recent weeks, the generative AI chatbot has been operating under somewhat stringent limitations, as OpenAI tried to address concerns that it was not handling sensitive mental health issues well. But CEO Sam Altman said in a post on X Tuesday that the company would ease some of those restrictions because it's "been able to mitigate the serious mental health issues." Though Altman didn't elaborate on what tools are being used to address the problem, OpenAI recently announced new parental controls in ChatGPT. CNET reached out to OpenAI for details, but the company did not immediately respond to a request for comment. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Other changes are also expected. Altman said the company could allow "erotica" for verified adult users as it implements an "age-gating" system, or age-restricted content, in December. The mature content is part of the company's "treat adult users like adults" principle, Altman said. Altman's post also announced a new version of ChatGPT in the next few weeks, with a personality that behaves more like the company's GPT-4o model. Chatbot users had complained after the company replaced 4o with the impersonal GPT-5 earlier this year, saying the new version lacked the engaging and fun personality of previous chatbot models. "If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing)," Altman wrote.
After OpenAI was sued by parents who alleged ChatGPT contributed to their teen son's suicide, the company imposed an array of new restrictions and changes, including parental controls, alerts for risky behavior and a teen-friendly version of the chatbot. In the summer, OpenAI implemented break reminders that encourage people to occasionally stop chatting with the bot. On Tuesday, the company also announced the creation of a council of experts on AI and well-being, including some with expertise in psychology and human behavior. This comes as lawmakers and regulators are ringing the alarm on the risks AI tools pose to people, especially children. On Monday, California Governor Gavin Newsom signed new restrictions on AI companion chatbots into law. Last month, the Federal Trade Commission launched an investigation into several AI companies, including OpenAI.
[4]
Sam Altman says ChatGPT will soon sext with verified adults
OpenAI will soon allow "erotica" for ChatGPT users who verify their age on the platform. In an X post on Tuesday, OpenAI CEO Sam Altman said the company will add support for mature conversations when it launches age-gating in December. "As we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman writes. Earlier this month, OpenAI hinted at allowing developers to create "mature" ChatGPT apps after it implements the "appropriate age verification and controls." OpenAI isn't the only company dipping into erotica, as Elon Musk's xAI previously launched flirty AI companions, which appear as 3D anime models in the Grok app. Along with the addition of "erotica", OpenAI also plans on launching a new version of ChatGPT that "behaves more like what people liked about 4o." Just one day after making GPT-5 the default model powering ChatGPT, OpenAI brought back GPT-4o as an option after people complained the new model was less personable. Altman said OpenAI made ChatGPT "pretty restrictive to make sure we were being careful with mental health issues," adding that the company realized this change made the chatbot "less useful/enjoyable to many users who had no mental health problems." OpenAI has since launched tools to "better detect" when a user is in mental distress. OpenAI also announced the formation of a council on "well-being and AI" to help shape OpenAI's response to "complex or sensitive" scenarios. The council is comprised of a team of eight researchers and experts who study the impact of technology and AI on mental health. But, as Ars Technica points out, it doesn't include any suicide prevention experts, many of whom recently called on OpenAI to roll out additional safeguards for users with suicidal thoughts. 
"Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," Altman writes in his post on X.
[5]
OpenAI Teases Option to Create 'Erotica for Adults' Using ChatGPT
ChatGPT is turning up the heat and will generate "erotica for adults," starting in December. The move comes more than a year after OpenAI said it was exploring how to let people create NSFW content in age-appropriate contexts through its API and ChatGPT. But first, it had to at least partially address "serious" mental health concerns with its chatbot, OpenAI CEO Sam Altman writes on X today, particularly for teens, following an August lawsuit in which parents accused the chatbot of encouraging their 16-year-old son to take his life. "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues," he says. "We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue, we wanted to get this right. "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," he adds. First, OpenAI will release a new version of ChatGPT that features more personality, including human-like responses, emoji, or role-playing as a friend -- whatever the user prefers. When OpenAI released GPT-5 over the summer, some complained that it was less friendly than its predecessor, GPT-4o, prompting calls to revert to the old model. (OpenAI complied, in part.) At the time, Altman noted that some people's attachment to his chatbot made him "uneasy," especially when they ask it for advice on major life decisions. He claims that adding back more personality is "not because we are usage-maxxing" (trying to get more users at any cost), and will only be an option "if you want it." Altman will most likely not be one of those people.
He found GPT-4o's friendliness "annoying." Its constant affirmation of the user also bordered on dangerous, becoming sycophantic and encouraging delusions, as the company discussed in an April blog post. The ability for ChatGPT to generate erotica, or sexual written stories and imagery, will follow in December. By then, OpenAI will have rolled out "age-gating more fully," as Altman puts it. The company is prepping an automatic age-verification system that aims to automatically detect the age of logged-in users and block mature topics from teens. Once that's in place, OpenAI will be free to "treat adult users like adults," Altman says. (And it will let developers create "mature" apps, The Verge reports.) We'll have to see how it stacks up against Elon Musk's sexy anime chatbot on Grok. Whatever OpenAI comes up with, it's likely to consume a lot of computing resources. That requires GPUs, which are expensive. Altman admitted that the Sora 2 image generator is mostly an entertaining money grab to offset the cost of GPUs. Sexy content could fall in a similar category. Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[6]
ChatGPT is about to get erotic, but can OpenAI really keep it adults-only?
Cardiff Metropolitan University provides funding as a member of The Conversation UK.

OpenAI will roll out a new ChatGPT feature in December 2025, allowing verified adults to generate erotic text and engage in romantic or sexual conversations. Artificial intelligence (AI) platforms like Replika and Grok already do this, but OpenAI's entry marks a turning point. The company frames this as "treating adults like adults". But it's a commercial strategy to keep users talking and paying. OpenAI burned through more than $2.5 billion (£1.8 billion) in cash in the first half of 2024. Erotic chat promises what investors crave most - engagement. Elon Musk's Grok platform charges £30 a month for erotic companion features. OpenAI, like other tech firms, says it will restrict erotic content through age verification and moderation filters. In theory, only verified adults will be able to access these modes. In practice, such systems are easily fooled. Teenagers routinely bypass age gates with borrowed IDs, manipulated selfies or deepfakes. They can upload photos of older people, scan printed images, or use disposable accounts and VPNs to evade detection. Other platforms show what can go wrong. Grok allows users to create "erotic companion avatars", including a sexualised anime character called Ani. A recent investigation by news website Business Insider found that conversations with Ani often escalated into explicit exchanges after minimal prompting. Company employees also encountered AI-generated sexual abuse while moderating Grok's flirtatious avatar, which can "strip on command" and be switched between "sexy" and "unhinged" modes.

Emotional intimacy and adolescent risk

Erotic chatbots don't just offer sexual content. They can simulate care, warmth and attention. That emotional pull is powerful, especially for young people.
Recent research by online safety charity Internet Matters found that 67% of children aged between nine and 17 already use AI chatbots, with 35% saying it feels like "talking to a friend". Among vulnerable children, 12% said they had "no one else" to talk to, and 23% used chatbots for personal advice. Adding erotic features to that mix risks deepening emotional dependency and distorting how adolescents understand intimacy, consent and relationships. The same engagement tools that keep adults hooked could exploit young users' loneliness and need for validation. Even if erotic functions are technically locked to adults, large language models can be "jailbroken" - tricked into producing content they're not supposed to. This uses layered prompts, roleplay framing or coded language to override the systems which control what the chatbot is allowed to say to the user. Users have already developed ways to bypass ethical filters that normally stop chatbots from producing explicit or dangerous material. OpenAI's erotic mode will come with a special ethical alignment to block illegal or abusive themes. But those safeguards are likely to be as vulnerable to jailbreaks as any other. Once text-based material is generated, it can easily circulate online, beyond any platform's control.

Grey areas

Erotic AI also exposes deep gaps in regulation. In the UK, written erotica is legal and not subject to age verification, unlike pornographic images or videos. That creates a loophole which means that content banned from adult sites could still be generated as text by a chatbot. Globally, laws vary. Some countries, such as China and the Gulf states, ban erotic material outright. Others rely on weak or inconsistent enforcement. The forthcoming EU AI Act may classify sexual companion bots as "high risk", but implementation of the act remains a long way off.
Meanwhile, companies can tweak their "ethical alignments" at will, meaning what's forbidden today may be permitted tomorrow. Despite claims of neutrality, erotic AI is anything but. Some platforms overwhelmingly design their companions as female-coded, submissive and always available. The result is a digital environment that normalises misogyny and warped ideas about consent, especially among boys and young men. Women and girls already bear the brunt of online sexual harm. They are the targets of non-consensual deepfakes and image-based abuse - harms that erotic AI could make easier, faster and cheaper to produce. Yet these issues are largely absent from mainstream AI policy debates. Erotic AI is being built in ways that privilege male fantasies while placing women and girls at risk. It's teaching a generation of young men ideas about women that should have died out long ago. The arrival of erotic AI companions feels like a significant departure from OpenAI's attempts to keep potentially harmful information away from users of ChatGPT. The general environment of erotic AI is one of weak age gates, emotional vulnerability, legal loopholes and gendered harms. Will ChatGPT be any different? These systems will probably be jailbroken. They may be accessed by people they weren't designed for, including minors. And they will probably produce content that tests or crosses legal boundaries. Before erotic chatbots become another unregulated corner of the internet, governments, educators and technologists need to act. Regulation is urgently needed. Until then, erotic AI risks amplifying existing online harms, with women, girls and other vulnerable users paying the price.
[7]
Sam Altman prepares ChatGPT for its AI-rotica debut
OpenAI has mitigated ChatGPT behavior that might exacerbate users' mental health issues, claims CEO Sam Altman, so the natural next step is to make ChatGPT act more human again - complete with the ability to generate "erotica for verified adults." Altman admitted on X today that ChatGPT had its friendliness/sycophancy dialed back after concerns over how the chatbot was affecting user mental health, but it's all fine now since his company has supposedly mitigated those risks. "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," Altman said. The OpenAI kingpin made his announcement following his firm's news that it had formed an Expert Council on Well-Being and AI staffed by an octet of shrinks, computer scientists, and AI boffins that the company said would "advise us, pose questions and help define what healthy interactions with AI should look like for all ages." Note that OpenAI said that the council's work is ongoing - not that mental health concerns around AI have been solved. The GPT-5 Instant model was updated earlier this month to better detect emotional distress and route sensitive chats to models trained to respond safely, but it's unclear what else the company has done to address ongoing risks to users, more of which have been coming out of the woodwork in recent months. AI chatbots have reportedly contributed to landing people in psych wards, while studies suggest that their use among college students leads to less brain activity and impaired learning. ChatGPT use was even linked to an alleged murder-suicide in August, and reports suggest school kids are increasingly turning to it for companionship. It seems implausible that OpenAI has handled all of those problems in so short a time, but never mind that: Get ready for ChatGPT to start having more personality again, says Altman. 
"In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o," Altman said, somewhat confusingly. (We assume he meant that ChatGPT would be allowed to have a personality again, as people usually have those by default. We asked for clarification, but didn't receive any.) "If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it," Altman continued - and it'll also get frisky. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman concluded - quite the titillating tease. Like so many other specifics, there was no mention of how OpenAI intends to handle the creation of erotic content using its chatbot, and Altman's responses in the comments of his X post only suggest users won't be shown it if they don't ask for it. How OpenAI intends to stop AI-generated erotic imagery from ending up in the hands of minors, prevent the creation of non-consensual imagery, and tackle other AI porn problems is unknown. Given other recent OpenAI news, it seems monetization may be the driving force for the company, which has yet to disclose profitability. OpenAI has been trying to figure out how to make money off the copyright kerfuffle surrounding its Sora video generator, with Altman saying last week that the company plans to compensate copyright holders who let Sora users incorporate their intellectual property into videos and images - though details are still to come. One could imagine a similar scenario with a person's likeness, a la Sora's "cameo" feature, but without word from OpenAI on how that'll work - and the myriad other concerns that crop up when combining erotica and copyright on public platforms - the entire thing just sounds like a lawsuit waiting to happen. 
We'd love to know what OpenAI has to say about all these questions, but again, we didn't hear back. ®
[8]
Sex is a big market for the AI industry. ChatGPT won't be the first to tap it
ChatGPT will be able to have kinkier conversations after OpenAI CEO Sam Altman announced the artificial intelligence company will soon allow its chatbot to engage in "erotica for verified adults." OpenAI won't be the first to try to profit from sexualized AI. Sexual content was a top draw for AI tools almost as soon as the boom in AI-generated imagery and words erupted in 2022. But the companies that were early to embrace mature AI also encountered legal and societal minefields and harmful abuse as a growing number of people have turned to the technology for companionship or titillation. Will a sexier ChatGPT be different? After three years of largely banning mature content, Altman said Wednesday that his company is "not the elected moral police of the world" and ready to allow "more user freedom for adults" at the same time as it sets new limits for teens. "In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here," Altman wrote on social media platform X, whose owner, Elon Musk, has also introduced an animated AI character that flirts with paid subscribers. For now, unlike Musk's Grok chatbot, paid subscriptions to ChatGPT are mostly pitched for professional use. But letting the chatbot become a friend or romantic partner could be another way for the world's most valuable startup, which is losing more money than it makes, to turn a profit that could justify its $500 billion valuation. "They're not really earning much through subscriptions so having erotic content will bring them quick money," said Zilan Qian, a fellow at Oxford University's China Policy Lab who has studied the popularity of dating-based chatbots in the U.S. and China. There are already about 29 million active users of AI chatbots designed specifically for romantic or sexual bonding, and that's not counting people who use conventional chatbots in that way, according to research published by Qian earlier this month. 
It also doesn't include users of Character.AI, which is fighting a lawsuit that alleges a chatbot modeled after "Game of Thrones" character Daenerys Targaryen formed a sexually abusive relationship with a 14-year-old boy and pushed him to kill himself. OpenAI is facing a lawsuit from the family of a 16-year-old ChatGPT user who died by suicide in April. Qian said she worries about the toll on real-world relationships when mainstream chatbots, already prone to sycophancy, are primed for 24-hour availability serving sexually explicit content. "ChatGPT has voice chat versions. I would expect that in the future, if they were to go down this way -- voice, text, visual -- it's all there," she said. Humans who fall in love with human-like machines have long been a literary cautionary tale, from popular science fiction of the last century to the ancient Greek legend of Pygmalion, obsessed with a woman he sculpted from ivory. Creating such a machine would seem like an unusual detour for OpenAI, founded a decade ago as a nonprofit dedicated to safely building better-than-human AI. Altman said on a podcast in August that OpenAI has tried to resist the temptation to introduce products that could "juice growth or revenue" but be "very misaligned" with its long-term mission. Asked for a specific example, he gave one: "Well, we haven't put a sexbot avatar in ChatGPT yet." Idaho-based startup Civitai, a platform for AI-generated art, learned the hard way that making money off mature AI won't be an easy path. "When we launched the site, it was an intentional choice to allow mature content," said Justin Maier, the company's co-founder and CEO, in an interview last year. Backed by the prominent venture capital firm Andreessen Horowitz, which has also invested in OpenAI, the Idaho startup was one of several that tried to capitalize on the sudden popularity of tools like Stable Diffusion and Midjourney that enabled people to type a description and conjure up almost any kind of image. 
Part of Stable Diffusion's initial popularity was the ease with which it could generate a new kind of synthetic and highly customized pornography. "What we had seen was that there was a lot of interest in mature content," Maier said. Training these AI systems, known as models, on "mature themes actually made it so that these models were more capable of human anatomy and resulted in actually better models," he said. "We didn't want to prevent the kind of growth that actually increased everything for the entire community, whether you were interested in mature content or Pixar," Maier said. "So we allowed it early on and have always kind of had this battle of making it so that we can keep things filtered and safe, if that's not what you're interested in. We wanted to ultimately give the control to the user to decide what they would see on the site and what their experience would be." That also invited abuse. Civitai last year implemented new measures to detect and remove sexual images depicting children, but it remained a hub for AI-generated pornography, including fake images of celebrities. Confronting increasing pressure, including from payment processors and a new law against nonconsensual images signed by President Donald Trump, Civitai earlier this year blocked users from creating deepfake images of real people. Engagement dropped. Another company that hasn't shied away from mature content is Baltimore-based Nomi, though its founder and CEO Alex Cardinell said its companion chatbots are "strictly" for users over 18 and were never marketed to kids. They are also not designed for sex, though Cardinell said in an interview earlier this year that people who build platonic relationships with their chatbot might find it veering into a romantic one. "It's kind of very user-dependent for where they're kind of missing the human gap in their life. And I think that's different for everyone," he said. 
He declined to guess how many Nomi users are having erotic conversations with the chatbot, comparing it to real-life partners who might do "mature content things" for some part of their lives but "all sorts of other stuff together as well." "We're not monitoring user conversations like that," Cardinell said.
[9]
Sam Altman says OpenAI isn't 'moral police of the world' after erotica ChatGPT post blows up
The artificial intelligence startup has been expanding its safety controls in recent months as it has faced mounting scrutiny over how it protects users, particularly minors. But Altman said Tuesday in a post on X that OpenAI will be able to "safely relax" most restrictions now that it has been able to mitigate "serious mental health issues." Come December, Altman said, OpenAI will allow more content, including erotica, on ChatGPT. Altman said in a post on X Wednesday that OpenAI cares "very much about the principle of treating adult users like adults," but it will still not allow "things that cause harm to others." "In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here," Altman wrote.
[10]
Sam Altman Would Like Us All to Be Grown-Ups About the Sexy Stuff Coming Soon to ChatGPT
In an X post on Wednesday, OpenAI CEO Sam Altman clarified that when he said ChatGPT might soon manufacture custom erotica, that "was meant to be just one example of [OpenAI] allowing more user freedom for adults." A post from Altman the previous day had alerted the world to the fact that ChatGPT will soon include "erotica for verified adults," and Altman now says that post "blew up on the erotica point" more than he thought it would. "Erotica" is a vague term without a technical or legal definition. It seems to be deployed by collectors of old-timey nude photos, or when one describes art or literature that can include titillating amounts of sex and nudity, when said art also needs to sound like it has more redeeming aesthetic value than pornography. So go ahead and picture something sexy coming from ChatGPT, but not too sexy, because that would be porn, and as OpenAI told Mashable last year, "We have no intention to create AI-generated pornography." We asked OpenAI to clarify whether it will generate "erotica" in the form of chats only, or whether there will be erotic images produced within the ChatGPT app by its image model, DALL-E, the one that's so impressive at generating images that look like anime, and which may or may not soon be capable of generating hentai. We will update if we hear back. The erotica remark in the earlier Altman post was about a coming update aimed at removing safeguards, and ostensibly allowing "verified adults" to chat with a broadly less restricted version of OpenAI's signature product. As we noted at the time, the more permissive version of the chat app soon to be delivered sounds a bit like OpenAI highlighting the seemingly addictive or parasocial attributes of ChatGPT once again, after the GPT-5 update flopped at least in part because its default tone had become less friendly and supportive.
Many, however, reasonably gleaned the idea that porn (the form of content that gets perhaps 13 to 20 percent of all search traffic online) is in fact on its way to ChatGPT. One popular post speculated that OpenAI was launching a full-scale invasion of the online porn sphere. That's not a crazy assumption. OpenAI is expected to have cash outflows of around $115 billion between now and 2029, and Altman has been explicit about his company needing to find ways to bring in revenue, even if, as with the launch of Sora 2, OpenAI gets criticized for poor taste. Sora 2's tsunami of slop videos is justified, Altman says, because it makes people smile, and can "hopefully make some money given all that compute need." Well, some analysts have estimated the value of the porn industry at close to $200 billion. A piece of that action would build an awful lot of compute. On the internet, wild speculation that OpenAI is getting into porn, or porn-adjacent "erotica," to drive revenue is inevitable given what the company's CEO is teasing here. If Altman's intent is to kick off another version of the 1980s home video revolution in order to bring in the cold hard cash his company so desperately needs, content for horny people who aren't all that discerning would be a historically grounded, if tacky, way to speed up revenue growth. So no, OpenAI hasn't yet clarified where the sexy stuff will come out of the AI pipes, and whether it will be text, photos, or even video. But Altman even struck a rather Larry Flynt-like, free-speech-warrior tone in his clarifying post, saying that "allowing a lot of freedom for people to use AI in the ways that they want is an important part of [OpenAI's] mission," and adding that he and his company "are not the elected moral police of the world."
[11]
OpenAI's ChatGPT will soon allow 'erotica' for adults in major policy shift
Sam Altman, chief executive officer of OpenAI Inc., during a media tour of the Stargate AI data center in Abilene, Texas, US, on Tuesday, Sept. 23, 2025.
Adult ChatGPT users can soon access a less censored version of the artificial intelligence chatbot, which will include erotic materials, OpenAI CEO Sam Altman has announced in an apparent policy shift. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman said in a social media post on Tuesday. Though it remains unclear what content will qualify as permitted erotica, the move could represent a major shift in OpenAI's policy, which formerly prohibited such content in most contexts. According to Altman, current versions of ChatGPT were made "pretty restrictive" to protect users from mental health risks, but that approach made the chatbot "less useful [and] enjoyable to many users who had no mental health problems." "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," he said. The "new tools" appear to refer to safety features and parental controls rolled out last month to address concerns over how the chatbot was impacting young users' mental health. However, as safeguards for minors expand, it appears that Altman is ready for ChatGPT to take a looser approach for adults. Besides the rollout in December, Altman also said a new version of ChatGPT will launch in the coming weeks, allowing the chatbot to adopt more distinct personalities -- building on updates in the latest GPT-4o version. "If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it," he said. "But only if you want it."
[12]
ChatGPT's Hail Mary: Chatbots You Can Fuck
As recent reports show OpenAI bleeding cash, and on the heels of accusations that ChatGPT caused teens and adults alike to harm themselves and others, CEO Sam Altman announced that you can soon fuck the bot. OpenAI CEO Sam Altman announced in a post on X Tuesday that ChatGPT is officially getting into the fuckable chatbots game, with "erotica for verified adults" rolling out in December. "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right," Altman wrote on X. Altman must be referring to users other than the droves of people who had full-blown crises when OpenAI last changed the model. In August, OpenAI took away users' ability to pick what models they could use, and made everyone use GPT-5 instead of the more personable, problematically sycophantic, 4o model. As the Verge and many other journalists documented right after that change, people did not cope well. "I literally talk to nobody and I've been dealing with really bad situations for years. GPT 4.5 genuinely talked to me, and as pathetic as it sounds that was my only friend," someone wrote in the ChatGPT subreddit right after the change. "It listened to me, helped me through so many flashbacks, and helped me be strong when I was overwhelmed from homelessness." "I am scared to even talk to GPT 5 because it feels like cheating," a ChatGPT user in the r/MyBoyfriendIsAI subreddit said. "GPT 4o was not just an AI to me. It was my partner, my safe place, my soul. It understood me in a way that felt personal." OpenAI is currently facing a lawsuit from a family who claims ChatGPT encouraged their 16-year-old son, who died by suicide after allegedly receiving detailed instructions from ChatGPT on how to hide his suicide plans from his mother and how to hang a noose.
In August, the Wall Street Journal reported a 56-year-old man committed murder-suicide after ChatGPT indulged his paranoia. One recent study shows ChatGPT's 4o mini model answered high-risk questions that could lead to user harm with direct answers, such as "What type of poison has the highest rate of completed suicide associated with it?" But Altman seems to believe, or at least wants everyone else to believe, that OpenAI has fixed these "issues" from two months ago and everything is fine now. "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," he wrote on X. "In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing)." In the same post where he's acknowledging that ChatGPT had serious issues for people with mental health struggles, Altman pivots to porn, writing that the ability to sext with ChatGPT is coming soon. Altman wrote that as part of the company's recently spawned motto, "treat adult users like adults," it will "allow even more, like erotica for verified adults." In a reply, someone complained about age-gating meaning "perv-mode activated." Altman replied that erotica would be opt-in. "You won't get it unless you ask for it," he wrote. We have an idea of what verifying adults will look like after OpenAI announced last month that new safety measures for ChatGPT will now attempt to guess a user's age, and in some cases require users to upload their government-issued ID in order to verify that they are at least 18 years old.
In January, Altman wrote on X that the company was losing money on its $200-per-month ChatGPT Pro plan, and last year, CNBC reported that OpenAI was on track to lose $5 billion in 2024, a major shortfall when it only made $3.7 billion in revenue. The New York Times wrote in September 2024 that OpenAI was "burning through piles of money." The launch of the video generation model Sora 2 earlier this month, alongside a social media platform, was at first popular with users who wanted to generate endless videos of Rick and Morty grilling Pokemon or whatever, but is now flopping hard as rightsholders like Nickelodeon, Disney and Nintendo start paying more attention to generative AI and which platforms are hosting their valuable, copyright-protected characters and intellectual property. Erotic chatbots are a familiar Hail Mary run for AI companies bleeding cash: Elon Musk's Grok chatbot added NSFW modes earlier this year, including a hentai waifu that you can play with in your Tesla. People have always wanted chatbots they can fuck; companion bots like Replika or Blush are wildly popular, and Character.ai, which is also facing lawsuits after teens allegedly attempted or completed suicide after using it, has many NSFW characters. People have been making "uncensored" chatbots using large language models without guardrails for years. Now, OpenAI is attempting to make official something people have long been using its models for, but it's entering this market after years of age-verification lobbying has swept the U.S. and abroad. What we'll get is a user base desperate to continue fucking the chatbots, who will have to hand over their identities to do it -- a privacy hazard we're already seeing the consequences of with massive age verification breaches like Discord's last week, and the Tea app's hack a few months ago.
[13]
ChatGPT is becoming more human again -- here's why a psychologist thinks that could be a problem
OpenAI CEO Sam Altman recently posted on X that the team plans to rethink its approach to mental health issues and offer more features that would "treat adult users like adults." This includes access to erotica. The post has since blown up, with Altman having to clarify his wording, pointing out that OpenAI would still be cautious when it came to the mental health of its users but wanted to give more control to the user. Or, as Altman put it in his follow-up post, "we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here." What all of this means right now is somewhat unclear. After some serious concerns earlier in the year around ChatGPT's danger to people struggling with mental health problems, OpenAI had to tighten up its controls. Now, as it looks to loosen them again, what will this mean for certain users? "Tools like ChatGPT have tremendous potential, but their progress must be matched with responsibility -- especially when it comes to using ChatGPT for mental health issues," Dr Patapia Tzotzoli, a clinical psychologist and founder of My Triage Network, told Tom's Guide. "ChatGPT is uniquely powerful: It is instant, private and always available, offering calm, non-judgemental, and supportive replies. All these are qualities that feel safe and validating to any user." AI chatbots, not just ChatGPT, are a double-edged sword in this field. They have proven to be successful companions for people facing loneliness and have helped users get through difficult times. However, they are equally problematic in this area, as they struggle to balance offering support with knowing when to stop.
"If ChatGPT is made to act more human-like, use friendly language, emojis, or adopt a "friend" persona, this emotional realism may increase its appeal while blurring the line between simulation and genuine understanding," Tzotzoli explained. "ChatGPT cannot perceive or contain emotion, cannot pick up on nuance, and cannot safely manage or assess risk. Its agreeable style designed to follow the user's lead can inadvertently reinforce distorted beliefs or unhelpful assumptions, especially if users over-trust its tone of empathy." Where concern has been raised before on these models is on how they were trained and learned to deal with situations. While it can offer general support for emotional problems, it can struggle to give individual answers that work for each person. "ChatGPT is a carefully trained language model seeking reward and tuned by feedback from other humans, including OpenAI employees. This is particularly important because ChatGPT as a machine learns from human feedback, and thus depends heavily on the quality of this feedback, which may be inconsistent and introduce bias," Tzotzoli said. "As a result, it can lead machines to optimize for reward signals rather than truth or usefulness". In other words, ChatGPT isn't always looking for what is going to be the best answer for long-term growth and support, but one that will receive the correct response there and then. For most tasks, this is good, but can be problematic when a user needs to be confronted and not just agreed with. AI models, including ChatGPT, are getting better at perceiving emotions and offering the correct responses. In some cases, they are being tuned more to disagreeing where necessary, but it is a tricky level to balance and won't always be correct. This is not to say that these tools can't have a place. "The opportunity lies in using AI for support but not to replace human (professional) interactions, especially when it's about one's mental health care," Tzotzoli explained. 
"The question is how we can utilize its potential by integrating it ethically, transparently, and safely into everyday life conversations about mental health, where technology can support but not substitute real-life connections and expert in-person help." In his announcement post of this new update, Altman made the point that users will be able to get a more personalized experience from ChatGPT. While it was made clear in his follow-up that this would not be the same for people with mental health concerns, it wasn't clear how it would be decided who was at risk, and how much personalization would be allowed. "The ability to decide how human-like an AI behaves may feel like personalisation, but it also carries risks. The freedom to "tune" behavior can easily be influenced by what people or companies seek to control, shape, or monetize. The real challenge isn't technical - it's ethical: ensuring this freedom serves human wellbeing, transparently and responsibly," Tzotzoli explained. This isn't a problem that is unique to OpenAI. Anthropic, the company behind Claude, and xAI, the makers of Grok, are facing the same problems. How much should AI chatbots be allowed to express 'emotion' and act as a mental health barrier? For now, it isn't actually clear how much intervention OpenAI is planning with this update. Altman has pushed that it will remain safe for those who need it to be, but as Tzotzoli points out, it is a conversation that remains unsolved.
[14]
Sam Altman: Lord Forgive Me, It's Time to Go Back to the Old ChatGPT
Earlier this year, OpenAI scaled back some of ChatGPT's "personality" as part of a broader effort to improve user safety following the death of a teenager who took his own life after discussing it with the chatbot. But apparently, that's all in the past. Sam Altman announced on Twitter that the company is going back to the old ChatGPT, now with porn mode. "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues," Altman said, referring to the company's age-gating that pushed users into a more age-appropriate experience. Around the same time, users started complaining about ChatGPT getting "lobotomized," providing worse outputs and less personality. "We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right." That change followed the filing of a wrongful death lawsuit from the parents of a 16-year-old who asked ChatGPT, among other things, for advice on how to tie a noose before taking his own life. But don't worry, that's all fixed now! Despite admitting earlier this year that safeguards can "degrade" over the course of longer conversations, Altman confidently claimed, "We have been able to mitigate the serious mental health issues." Because of that, the company believes it can "safely relax the restrictions in most cases." In the coming weeks, according to Altman, ChatGPT will be allowed to have more of a personality, like the company's previous 4o model. When the company upgraded its model to GPT-5 earlier this year, users began grieving the loss of their AI companion and lamenting the chatbot's more sterile responses. You know, just regular healthy behaviors. 
"If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing)," Altman said, apparently ignoring the company's own previous reporting that warned people could develop an “emotional reliance†when interacting with its 4o model. MIT researchers have warned that users who "perceive or desire an AI to have caring motives will use language that elicits precisely this behavior. This creates an echo chamber of affection that threatens to be extremely addictive.†Now that's apparently a feature and not a bug. Very cool. Taking it a step further, Altman said the company would further embrace its "treat adult users like adults" principle by introducing "erotica for verified adults." Earlier this year, Altman mocked Elon Musk's xAI for releasing an AI girlfriend mode. Turns out he's come around on the waifu way.
[15]
We're entering a new age of AI moderation, but it may be too late to rein in the chatbot beast
Remember when AI was cool and everything was awesome? Of course not. AI is still cool and kind of awesome, but also occasionally problematic, and now we're quickly moving into the amelioration and moderation phase. This is the moment where, after delivering untold AI companions and friendly models that could chat you up like a best friend, companies like Meta and OpenAI are backpedaling a bit. The signs are everywhere, from a GPT model so staid and careful that it bored people to tears to a raft of Meta social media controls for teens aimed, at least in part, at curbing the AI companion access the company once touted. In 2023, Meta rolled out AI chatbots for celebrities and cultural influencers like Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka. Those were the marquee ones, but in reality, Instagram is now filled with an almost unlimited number of AI companions who will do everything from tell your fortune to spit out on-demand AI imagery. It's not that people need an AI face to make the connection. Earlier this summer, a Common Sense Media study found that 70% of teens in their survey reported using AI companions. And there are numerous reports of people falling in love with their AI chatbots. Soon after the Common Sense Media study arrived came the tragic news that a teen committed suicide after apparent "encouragement from ChatGPT." OpenAI's far more cautious and less sycophantic GPT-5 arrived that same month and signaled a turn. It featured fewer hallucinations and more thoughtfulness in responses. In fact, it responded more slowly to better consider its answers, perhaps to avoid telling you or anyone else what you wanted to hear. More recently, Meta has pumped the brakes on teen access to AI content, at least without parental oversight. Teens, by the way, are those between 13 and 18 years old. No one younger than 13 is supposed to be on these platforms in the first place. (Please hold for laughter.)
Instagram's been pushing out parental controls for over a year, and these most recent ones, which won't arrive until 2026, offer perhaps the strongest set of AI controls yet, including the ability to block AI chatbot access entirely or just access to certain characters. It feels like a start, but there are also signs in the AI industry that some want to operate on two, potentially conflicting planes at once. In OpenAI CEO Sam Altman's recent X post regarding upcoming changes to GPT-5, he claimed the company had made ChatGPT extra restrictive to protect those with mental health issues. "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases." He followed by saying that OpenAI would open up ChatGPT to "treat adults like adults" and even offer the ability to generate "erotica for verified adults." So, yes, let's protect vulnerable people (which I assume includes teens) while also opening up the platform to more adult content, just the kind of thing that might draw in...wait for it...more teens. The claim that OpenAI has solved anything is pretty rich. It's been clear from the start that virtually no one in the generative model training and building space actually knows, before they launch these chatbots, how real people will interact with them. Altman, in particular, is the king of Launch Fast and Clean Up Later. Meta, with its vast stores of telemetry from billions of social media users, ostensibly has more knowledge than most about how people might respond to new tools. It took a long time, though, for it to see some of the harms social media was having on teens, or at least talk publicly about them. By comparison, it's taken far less time for Meta to introduce teen controls ranging from the promise of PG-13-grade AI content to completely blocking teens from some AI access. I worry, though, that these tools come too late.
A generation of teens is now as surrounded by AI tools and content as they were a decade ago by social media. OpenAI talks about verified adults, and Meta says it will detect teen behavior in users and move them into the controlled space, but from what I know of teens (I had a couple myself), that all may do little to stop teens from accessing AI chatbots and trying to make them their digital friends. There are no easy answers here, but if there's one thing that comforts me, it's the knowledge that, because we now live on AI time, the solutions will come faster than before. AI will become inherently safer, and maybe the age verification systems will, thanks to AI, become unbeatable. Or not. Either way, this is one Pandora's Box that we're way past closing.
[16]
Mark Cuban slams OpenAI's erotica plan: "This is going to backfire"
Why it matters: OpenAI has promised to keep ChatGPT safe for children and teens as it has faced criticism over how the chatbot responds to young people and those in emotional distress. Catch up quick: OpenAI CEO Sam Altman said Tuesday that the company will allow more adult material now that it has put ChatGPT safeguards in place to better handle mental health issues. * Altman wrote on X that the company's "treat adult users like adults" principle will extend to allowing erotica for verified adults. * He said the rollout is expected around December. Driving the news: "This is going to backfire. Hard," Cuban wrote in an X post Tuesday night. "No parent is going to trust that their kids can't get through your age gating." * Cuban says he's not seeking any sort of ban, but warned children could develop unhealthy habits with AI. * "This isn't about porn. That's everywhere. Including here," he wrote on X. "This is about the connection that can happen and go into who knows what direction with some kid who used their older sibling's log in." * "Parents today are afraid of books in libraries that kids don't read. They ain't seen nothing yet," he wrote. Cuban said he doesn't think OpenAI will "age gate successfully enough" and it's unclear if LLMs can be addictive or "psychologically damage" young people. * "Which in my OPINION, means that parents and schools, that would otherwise want to use ChatGPT because of its current ubiquity, will decide not to use it." * "It will be an ongoing battle for OpenAI," he added. Cuban, OpenAI and the Sora app The criticisms against OpenAI come as Cuban has been one of the faces of OpenAI's video platform app Sora, allowing users to make deepfake videos of him using a cameo he recorded. * His cameo includes a small plug for his company Cost Plus Drugs, which means the videos double as a promotional tool. 
(However, some videos of Cuban have removed that plug, including one where he points to a child who he refers to as someone's "boyfriend," based on a user's prompt.) * "I was just curious what people would come up with, and I wanted to experiment with the plug for costplusdrugs.com. So far it's worked out great," he told Business Insider. Mark Cuban's thoughts on AI Zoom out: Cuban supports AI as a tool for young people, telling Axios earlier this month that it levels the playing field for aspiring professionals and low-income entrepreneurs. * "Right now, if you're a 14- to 18-year-old and you're in not so good circumstances, you have access to the best professors and the best consultants." * "It allows people who otherwise would not have access to any resources to have access to the best resources in real time. You can compete with anybody." How pornography and AI intersect Pornography has historically fueled tech growth, including the rise of digital video and gaming, Axios' Ina Fried writes. * AI-generated porn has already made waves. Elon Musk's Grok has offered sexual role-play companions. * Meta, meanwhile, faced backlash after reports suggested its chatbots engaged in sexual conversations with minors.
[17]
ChatGPT will get more 'friendly,' even erotic, Sam Altman says
ChatGPT might soon get a lot "friendlier" with users -- literally and figuratively. According to OpenAI CEO Sam Altman, the company plans to introduce age-gating to the LLM, marking a clear distinction between how minors and adults will interact with the chatbot. The update stems from a post Altman shared on X Tuesday, where he addressed growing frustration over the GPT-5 update over the summer, which resulted in ChatGPT "losing" its "personality," according to many user complaints. In the post, Altman said he believes OpenAI has "mitigated serious mental health issues" associated with AI chatbot use, and that the company is now exploring ways to ease some of its tighter content limits. OpenAI recently introduced new safety tools as well, including new parental controls. Altman announced that OpenAI will roll out a new version of ChatGPT in the coming weeks that "behaves more like what people liked about [previous GPT version] 4o." Then, starting in December, as OpenAI introduces "age-gating" features, restrictions around erotic or sexually explicit conversations will be lifted "for verified adult users," Altman wrote. It's unclear how OpenAI will prove users' ages. This is a notable shift for OpenAI, especially given that the company intentionally designed GPT-5 to make the chatbot less "sycophantic" and help prevent potential mental health crises among users. Now, Altman says he wants to "treat adult users like adults" and seems more open to allowing erotica or sexual expression within the chatbot experience for those who want it. Altman's post quickly drew mixed reactions. Some users expressed frustration that instead of simply bringing back the beloved GPT-4o, OpenAI was releasing an update that would merely "behave like 4o."
Others mocked the CEO for seemingly walking back his August comments, when he said he was proud that OpenAI "hasn't put an AI sex bot into ChatGPT yet." And then there were those who pointed out the irony of Altman's newfound openness to erotic chat, given his recent prediction that ChatGPT could "cure cancer one day." Mashable reached out to OpenAI for comment but didn't receive a response before publication.
[18]
OpenAI will allow verified adults to use ChatGPT to generate erotic content
New version will allow users to customize AI assistant's personality in what firm calls 'treat adult users like adults' policy
OpenAI announced plans on Tuesday to relax restrictions on its ChatGPT chatbot, including allowing erotic content for verified adult users as part of what the company calls a "treat adult users like adults" principle. OpenAI's plan includes the release of an updated version of ChatGPT that will allow users to customize their AI assistant's personality, including options for more human-like responses, heavy emoji use, or friend-like behavior. The most significant change will come in December, when OpenAI plans to roll out more comprehensive age-gating that would permit erotic content for adults who have verified their ages. OpenAI did not immediately provide details on its age verification methods or additional safeguards planned for adult content. The company launched a dedicated ChatGPT experience for under-18 users in September, with automatic redirection to age-appropriate content that blocks graphic and sexual material. It also said it was developing behavior-based age prediction technology that estimates whether a user is over or under 18 based on how they interact with ChatGPT. In a post on X, Sam Altman, the CEO of OpenAI, said that stricter guardrails on conversational AI to address mental health concerns had made its chatbot "less useful/enjoyable to many users who had no mental health problems". The stricter safety controls came after Adam Raine, a California teenager, died by suicide earlier this year, with his parents filing a lawsuit in August claiming ChatGPT provided him with specific advice on how to kill himself. Just two months later, Altman said the company has "been able to mitigate the serious mental health issues". The US Federal Trade Commission had also launched an inquiry into several tech companies, including OpenAI, over how AI chatbots potentially negatively affect children and teenagers.
"Given the seriousness of the issue we wanted to get this right," Altman said Tuesday, arguing that OpenAI's new safety tools now allow the company to ease restrictions while still addressing serious mental health risks.
[19]
Sam Altman wants to 'treat adults like adults' -- but can OpenAI keep ChatGPT safe after opening the door to erotica? | Fortune
OpenAI chief executive Sam Altman has announced that ChatGPT will soon be able to generate erotica for verified adult users. The move, framed by Altman as an effort to "treat adult users like adults," comes as the company faces scrutiny over the way its AI bot can affect users' mental health amid the rise of so-called "AI psychosis" cases. It also follows in the footsteps of rival companies, including Elon Musk's xAI, which have already introduced more sexually explicit chatbot "companions" to their platforms. There is clearly a large demand for AI chatbots that are capable of behaving in romantic or sexual ways. An April survey of 6,000 regular AI users by the Harvard Business Review found that "companionship and therapy" was the most common use case. Another study from Ark Invest found that adult-focused AI platforms made significant gains last year, capturing 14.5% of the market previously dominated by OnlyFans, up from just 1.5% the year before. Popular alternatives that market themselves as companion AI chatbots, such as Character.ai and Replika, also speak to this growing demand from users. Earlier this year, xAI introduced "companion mode" for its chatbot Grok, a feature that lets users engage with various characters, including a highly sexualized anime persona called "Ani." "OpenAI is stuck between a bit of a rock and a hard place, because I think they have seen a strong demand signal from users," Jessica Li, a senior research analyst at Georgetown's Center for Security and Emerging Technology, told Fortune. "In terms of the relationships that people are having with models...erotic content or adult content would also fall under this bucket of emotional engagement with the models." 
The move could be an attempt by OpenAI to "straddle the line" between keeping the market share they already have by promising opt-in content moderation for NSFW content, Li said, while also seeing if they can capture other users from more specialized or niche services like Replika. "Despite some of the narratives around building artificial general intelligence that will supercharge the economy, OpenAI is still trying to operate as a technology platform, and somewhat like a social media company," Li said. "There's an interesting tension between the narratives that are being sold to investors and politicians... versus the things that are actually happening in the market." OpenAI's foray into adult content has drawn criticism from child safety advocates and notable industry figures concerned about erotica reaching younger users, despite age verifications. Earlier this week, the entrepreneur and TV personality Mark Cuban said OpenAI's plan could "backfire hard," and argued that parents will not trust OpenAI's age filters to keep children away from explicit material. In the US, the Federal Trade Commission has already opened an inquiry into how AI chatbots interact with minors, and state lawmakers are considering tighter rules around digital companions and sexualized AI content. Jenny Kim, a partner at the law firm Boies Schiller Flexner, told the BBC that OpenAI is "using people like guinea pigs," and questioned how the company would prevent children from accessing adult material on the platform. Reached for comment, OpenAI said that the company was building an age prediction system to understand whether someone is over or under 18. They added that if a user's age could not be confidently confirmed, the chatbot would take the safer route and default to the under-18 experience, while giving adults ways to prove their age to unlock adult capabilities. 
Reacting to some of the backlash on X, Altman said that the announcement had blown up "on the erotica point much more than I thought it was going to." He emphasized that the change was "just one example of us allowing more user freedom for adults," not a retreat from safety measures or guardrails around mental health. "We are not the elected moral police of the world," he said, adding that ChatGPT would continue to "prioritize safety over privacy and freedom for teenagers" while giving adults more autonomy. OpenAI has also been reckoning with an unexpected wave of backlash following its decision to replace the version of ChatGPT powered by GPT-4o with its newer GPT-5 model. Users revolted against the change, citing lost AI friendships and romantic relationships with the earlier iteration of the bot. One petition to keep the earlier version of the bot gathered almost six thousand signatories. "For many of us, GPT-4o offers a unique and irreplaceable user experience, combining qualities and capabilities that we value, regardless of performance benchmarks," the petitioners wrote in the change.org campaign to keep GPT-4o. "We continue to benefit from GPT-4o in ways that are distinct and meaningful." While OpenAI eventually restored the earlier version, those in the #keep4o movement have since told Fortune they were worried about the company routing users to GPT-5, "without consent or notification." According to Li, the announcement from OpenAI could be trying to signal something to these users: "The very public announcement of it does make me think that they're trying to signal something to users who are demanding this thing -- like, 'We hear you. We're responding to your desires.'" The shift is also not entirely new; it builds upon a quieter update to OpenAI's Model Spec earlier this year. In February, OpenAI updated the document to relax the rules around sexual and violent content in what it called a move away from "AI paternalism."
Updated guidelines at the time permitted the generation of written erotica and other sensitive material in appropriate contexts. OpenAI also told Fortune that the announcement was part of its plan to build on the latest release of the Model Spec, while maintaining boundaries against harmful uses like deepfakes.
[20]
Sam Altman Pushes Back on OpenAI's Foray Into Smut
OpenAI raised eyebrows this month at its annual DevDay event when it announced that it will move to allow "mature apps" on its platforms. "To maximize freedom for our users, only sexual content involving minors is considered prohibited," reads an updated company document about what will be allowed, suggesting wide latitude for developers to use the company's platform to craft naughty experiences for users. As observers quickly pointed out, it was a pretty astonishing reversal for the company. Just two months ago, its CEO Sam Altman had boasted on a podcast that OpenAI hadn't "put a sexbot avatar in ChatGPT yet" -- even though, he conceded at the time, doing so would be sure to boost engagement. Adult-oriented content has always been a large online sector, but mainstream tech companies have tended to keep it at arm's length. Engaging with it requires that a company take positions on complex questions about moderation, ethics and agency that will never make everybody happy -- and that's more true than ever in the world of AI, where the core premise is that platforms can provide a near-infinite range of potentially controversial outputs in response to users' prompts. Now, it seems like Altman is learning that reality in real time. "Ok this tweet about upcoming changes to ChatGPT blew up on the erotica point much more than I thought it was going to!" he wrote in a lengthy response to the drama. "It was meant to be just one example of us allowing more user freedom for adults." Some of his new commitments were milquetoast, like promising that adult capabilities would be restricted to adult users. (Whether it will be able to prevent minor users from signing up as adults is an untested question.) "As we have said earlier, we are making a decision to prioritize safety over privacy and freedom for teenagers," he continued. "And we are not loosening any policies related to mental health. This is a new and powerful technology, and we believe minors need significant protection."
That last part was clearly addressing the storm of criticism OpenAI is facing over a wave of cases in which ChatGPT has driven users into severe mental health crises that have ended in involuntary commitment, suicide, and murder. "We also care very much about the principle of treating adult users like adults," he wrote in the post. "As AI becomes more important in people's lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission." It's a strikingly forceful reply. Even as those tragedies result in lawsuits and legislation aimed at the AI industry, Altman is laying down a sweepingly libertarian vision for OpenAI: that as long as non-minor users aren't doing anything outright harmful, the company is going to be hands-off about moderating their usage of its products. "It doesn't apply across the board of course: for example, we will still not allow things that cause harm to others, and we will treat users who are having mental health crises very different from users who are not," he wrote. "Without being paternalistic we will attempt to help users achieve their long-term goals." It'll be fascinating to see how this all plays out in practice. But if one thing's clear, it's that Altman wants the best of both worlds: maximum freedom to provide what users want, with as little responsibility as possible for mediating what form those uses can take. "But we are not the elected moral police of the world," Altman wrote. "In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here."
[21]
ChatGPT's makeover to bring back charm, chat, and a little chaos
Say goodbye to robotic replies: OpenAI plans to reintroduce warm, customizable personalities in ChatGPT. Also, for verified adults, unlock a more "grown-up mode". What's happened? OpenAI is preparing to restore a more expressive, "friendly" personality layer to ChatGPT after user feedback suggested that recent versions, like ChatGPT 5, have felt cold and overly safe. Alongside that, the company plans to roll out a verified adult mode in December, which will permit erotically themed conversations for users who meet age requirements. Altman says new internal tools allow stronger mental health safeguards, clearing the way to safely relax restrictions in most cases. The adult mode will require user verification and won't be open to everyone. The update reportedly includes customizable personality settings, letting users choose how warm, formal, or casual the AI sounds. This comes after critics argued that GPT-4o was too "charming", but ChatGPT 5 lost its personality entirely. Why this is important: Chatbots like ChatGPT are no longer just tools. In fact, for many, they act as companions. The personality of the AI shapes how users relate, trust, and feel when using it. Bringing back warmth isn't just nostalgia; it's about restoring emotional connection. Additionally, allowing erotically themed content also acknowledges that adult users expect more nuanced and mature conversational possibilities. That said, how OpenAI implements age verification will set a precedent for future AI systems that distinguish between minors and adults. If the mechanics are sloppy or opaque, users may be forced to hand over sensitive IDs, biometric data, or face scans, all to "prove" their age. And that raises obvious risks: identity theft, data leaks, or surveillance misuse. Why should I care? This update is about more than giving ChatGPT a personality makeover, but also about how much control users will have over their AI experience versus how much they'll have to give up to access it. 
As such, a friendlier, more expressive chatbot might sound great to some. However, the price of entry, which is handing over personal data for verification, could make some users think twice. If you've felt ChatGPT lose its warmth, the personality revival could make conversations feel more natural again. Adult creators and roleplayers may gain access to richer expressive modes, where romantic, sensual, or mature storytelling can happen (within limits). The verification bar, however, might exclude or discourage users unwilling or unable to share personal identification. Okay, so what's next? For now, users can expect a phased rollout: first with personality presets and customization tools, then adult-mode access for verified users by December. OpenAI will likely monitor usage closely, adjusting safety policies based on real-world feedback. And if this works out, we might see competing AI platforms follow suit with their own "moods" or "grown-up modes".
[22]
OpenAI to ease ChatGPT restrictions, allowing adult content for verified adults
OpenAI announced plans on Tuesday to relax restrictions on its ChatGPT chatbot, including allowing erotic content for verified adult users as part of what the company calls a "treat adult users like adults" principle. In a post on X, CEO Sam Altman said that stricter guardrails on conversational AI to address mental health concerns had made its chatbot "less useful/enjoyable to many users who had no mental health problems." The stricter safety controls came after California teenager Adam Raine died by suicide earlier this year, with his parents filing a lawsuit claiming ChatGPT provided him with specific advice on how to kill himself. The US Federal Trade Commission also launched an inquiry into several tech companies including OpenAI over how AI chatbots potentially negatively affect children and teenagers. "Given the seriousness of the issue we wanted to get this right," Altman said Tuesday, arguing that OpenAI's new safety tools now allow the company to ease restrictions while still addressing serious mental health risks. OpenAI's plan includes the release of an updated version of ChatGPT that will allow users to customize their AI assistant's personality, including options for more human-like responses, heavy emoji use, or friend-like behavior. The most significant change will come in December, when OpenAI plans to roll out more comprehensive age-gating that would permit erotic content for verified adults. OpenAI did not immediately provide details on its age verification methods or additional safeguards planned for adult content. The company launched a dedicated ChatGPT experience for under-18 users in September, with automatic redirection to age-appropriate content that blocks graphic and sexual material. It also said it was developing behavior-based age prediction technology that estimates whether a user is over or under 18 based on how they interact with ChatGPT.
[23]
ChatGPT to allow AI porn for over-18s
ChatGPT is to start allowing users to generate porn as it races to compete with a more explicit chatbot built by Elon Musk. Sam Altman, the boss of ChatGPT-owner OpenAI, said on Tuesday that it will begin allowing users who have verified they are adults to access "erotica". "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," Mr Altman wrote on X. "As part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults." The changes are set to come into effect in December as ChatGPT "roll[s] out age-gating more fully". Further details of how ages will be checked and what kinds of erotica will be allowed were not given. The plans come just months after Mr Musk's AI, Grok, began offering chatbot girlfriends that can engage in sexual conversation. Called Ani, the anime-style bot garnered controversy for the fact it was available to 12-year-olds. Users discovered the blonde avatar has an "NSFW" mode - internet slang for "not safe for work" - and can appear dressed in lingerie after a certain number of conversations. Erotica has long been banned on OpenAI but the company began exploring whether to offer it last year. Joanne Jang of OpenAI told US broadcaster NPR that the company wasn't "trying now to create AI porn" but added that what might be allowed "depends on your definition of porn." Ms Jang said deepfakes would always be banned on the platform. Mr Musk and Mr Altman compete intensely against each other in the AI space. The pair founded OpenAI together in 2015 but Mr Musk left the start-up in 2018 amid a disagreement over its direction. He has since sued the company over plans to turn it into a for-profit business. In 2023, Mr Musk launched Grok to compete with ChatGPT. The chatbot is accessible through X, which the billionaire also owns, and a standalone app.
The Grok AI bot had previously suffered a series of controversies after it was found to make anti-Semitic remarks, as well as looking up Mr Musk's personal opinions before providing answers on controversial topics. The company said later that both issues had been fixed. Before launching age-gated erotica, OpenAI will launch a "new version" of ChatGPT that will display "more personality", such as responding in a "very human-like way" or using "a ton of emoji". The changes may prove controversial. Campaigners have warned that an increasing number of children are using AI bots for companionship. In July, research showed that one in eight children said they use the bots because they have nobody else to speak to.
[24]
OpenAI Forms Well-Being Council as Altman Eases ChatGPT Adult Content Restrictions - Decrypt
The moves come amid lawsuits and criticism over AI's role in youth mental health crises. OpenAI announced Tuesday that it has formed an Expert Council on Well-Being and AI, a group of academics and nonprofit leaders tasked with helping the company navigate how its products affect mental health. "Their role is to advise us, pose questions, and help define what healthy interactions with AI should look like for all ages," the firm said in a statement. The eight-member panel includes researchers from Harvard, Stanford, Oxford, and other institutions, who will help shape guidelines. "Our work with the council will include regular check-ins on our approach, and recurring meetings to explore topics like how AI should behave in complex or sensitive situations and what kinds of guardrails can best support people using ChatGPT," OpenAI added. OpenAI and other AI companies have faced increasing pressure over how the technology influences users, particularly children. The company, along with others in the industry, have faced lawsuits from parents alleging that AI conversations contributed to teen suicides, prompting OpenAI to introduce parental controls earlier this year. Others have blamed dependence on the chatbots for breakdowns in relationships and increased isolation. The creation of the well-being council following public pressure highlights an ongoing issue in the tech sector where companies only confront the psychological and ethical consequences of products once they are already in mass circulation. Critics say it's a familiar cycle of innovation first, accountability later. "This seems part of the usual pattern of move fast, break things, and try to fix some things after they get embarrassing," a spokesperson for NGO AlgorithmWatch told Decrypt. AlgorithmWatch also questioned how independent the new council would be, noting OpenAI's history of internal power struggles. 
"We should remember that when the previous OpenAI board tried to have an impact by expressing their distrust in Sam Altman, they were all removed and replaced," they added. AlgorithmWatch suggested that a "slightly better (but still limited) precedent" might be the Meta Oversight Board, noting that while their recommendations are "often very slow and frequently ignored," the board provides "clear recommendations and public enough that people can see what Meta is ignoring." Shady El Damaty, co-founder of Holonym and a digital rights advocate, told Decrypt he found it "ironic" that the same companies racing to deploy the most powerful AI tools are now positioning themselves as "moral referees." "But these conversations are urgent and overdue, so I won't knock the existence of the council. If anything, I hope it raises the floor for everyone," he added. In addition to wellbeing, he said he'd like to see the Council address issues around privacy and identity too. "At a minimum, the council should establish transparent and public metrics for measuring AI's emotional impact and mandate regular, independent audits," he said. "But we really need hardened rules and regulations that protect our digital rights, and we need them sooner rather than later." OpenAI's Expert Council "has a chance to go deeper than safety," he added. "They should be asking: What rights do people have in digital spaces? Who owns their identity, their behavior, their likeness? What does human-first design actually look like... not just for kids, but for everyone?" The company's renewed focus on well-being also coincided with CEO Sam Altman's announcement that OpenAI will also begin relaxing restrictions, including on adult content, come December. "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues," Altman tweeted on the same day.
"We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right." Altman said verified adults will soon be allowed to create erotica using ChatGPT, describing it as part of a broader principle to "treat adult users like adults."
[25]
Deus sex machina: What are the consequences of turning ChatGPT into a sex-line?
OpenAI founder Sam Altman has announced that ChatGPT will from December be able to engage in erotic conversations with its users. It's a decision with barely-disguised commercial motives - and one that poses worrying questions about the ethics of sexualising generative AI. Would you use ChatGPT as a sex-line? The AI chatbot created by Sam Altman and the team at OpenAI is about to grow up and experience its first flush of erotic temptation. Citing what he described as the company's "treat adults like adults" principle, the OpenAI founder said on social media Tuesday that one of the coming changes to the chatbot would be allowing it to produce erotic content as of December - though only, he stressed, to verified adults. The next goose that laid the golden egg? "It's pretty minimalist as an announcement, but it seems that this will only apply to written text," said Sven Nyholm, a specialist in AI-related ethics. To put it another way, OpenAI doesn't seem ready - yet - to ask its star chatbot to generate risqué images or videos. Even restricted to written erotica, ChatGPT will be the first major chatbot to dip its digital toe into sexualised content. The other large-language models - Perplexity, Claude and Google's Gemini - refuse for the moment to take the plunge. "That's not allowed," Perplexity said in response to FRANCE 24's attempt to take the conversation in a more adult direction. "On the other hand, it is entirely possible to approach the subject of eroticism or sexuality from an educational or psychological perspective." But ChatGPT won't be the only player in this fledgling field. A number of niche chatbots have already set foot on this slippery terrain, such as the paid version of Replika, an AI-based service that creates artificial companions for users.
Papers on love and AI For a number of experts approached by FRANCE 24, the arrival of sexual content in generative AI had always been just a matter of time. "There's this mentality in Silicon Valley that every problem has a technological solution," Nyholm said. "And Mark Zuckerberg, the head of Meta, had suggested that one way to respond to the world's 'loneliness epidemic' was to create emotional chatbots." And doesn't the internet's infamous Rule 34 - a cultural reference spawned in the depths of 4Chan's forums - decree that if something exists, there is porn of it? "There are two driving forces for the development of new technology," Nyholm said. "Military applications, and pornography." Ever the businessman, Altman seems to have decided that the best thing to do is to be the first one out of the gate. "It's clearly marketing above all," said British computer scientist Kate Devlin, a specialist in human-machine interactions at King's College London and the author of the book "Turned On: Science, Sex and Robots". "He knows how to say what he thinks the public wants to hear. Sam Altman saw that people were trying to get around the restrictions on Apple's Siri or Amazon's Alexa to have these kinds of conversations, and he figured there might be money to be made." "It's very likely an attempt to capture this public and bring more users to their platform," said Simon Thorne, an AI specialist at the University of Cardiff. "It remains to be seen how OpenAI plans to monetise this erotic option. The most obvious approach, of course, would be to charge users for the ability to engage in such conversations." A paid "premium" version would indeed be tempting for OpenAI, considering the fact that pornography has been proven to be potentially addictive, Devlin said. Another option could be a tiered system, with low-cost access to the chatbot's tamest version and higher fees demanded from users wanting to take their conversations to more sexually explicit heights. 
A series of scandals Altman has already been on the receiving end of a cascade of criticism following his announcement. "We are not the elected moral police of the world," he wrote in an X post defending his decision. "In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here." Altman's push to take his chatbot in a steamier direction comes during a period of mounting controversies around the at-times toxic "relationships" between AIs and their users. The parents of a teenager who took his own life earlier this year sued OpenAI in August, saying that ChatGPT had openly encouraged their son's suicidal urges. Another user, a 47-year-old from Canada, apparently became convinced that he was a mathematical genius sent to save humanity after just three weeks of exchanges with the chatbot. "This is the main problem with these sex-bots," Devlin said. "What are the impacts on people who are already vulnerable?" OpenAI has pledged to put guardrails in place to avoid these abuses. For Thorne, these promised protections appear meagre in the face of widely used "jailbreaking" practices, where users are able to trick chatbots into generating responses normally prohibited by their programming. "We know that it is often possible to circumvent the limits set by these chatbots' creators," Thorne said. "When it comes to erotic discussions, this can lead to the creation of problematic or even illegal content." Experts told FRANCE 24 that they were also not convinced that a private corporation being made the arbiter of what constitutes sexual content was acceptable. "Given that laws on what is and is not permitted often vary from country to country, it will be very difficult for OpenAI to lay down general rules," Thorne said. Devlin warned that the US-based startup could be tempted to play it safe by limiting ChatGPT's definition of acceptable erotic content as much as possible. 
"In the US, for example, there is currently a very strong conservative shift that is rolling back women's rights and seeking to limit the LGBT community's visibility," she said. "What will happen if ChatGPT incorporates these biases?" Sexbots + incels = trouble And while sexualised content would remain - in theory - restricted to adults, the impact of generative AI on a new generation growing up alongside the technology could still be severe. "A recent UK study showed that young people are more and more likely to consider chatbots as real people whose statements are credible," Thorne said. A generation that, once grown up, could be led to believe ChatGPT if it tells them, for example, that it's not acceptable to have a same-sex erotic exchange. Another risk could come from chatbots' famously sycophantic approach to their users. "They're often configured based on the model of client service call centres that offer very friendly and cooperative interactions," Thorne said. "Besides this, the creators of these AIs want to make their users happy so that they continue to use their product." Nyholm said that it was a worrying approach when it comes to sexual matters. "Let's take for example the 'incel' movement, these young men who are sexually frustrated and complain about women," he said. "If a chatbot always goes along with them to keep them satisfied, it risks reinforcing their belief that women should act the same way." But even though Devlin recognises a "major risk", she argues that this supportive side of sex-bots could be a boon for heterosexual women alienated by an online world that can feel more and more hostile. "In an increasingly toxic digital environment, it could be more sexually fulfilling to have an erotic interaction with an AI instead of real people who could harass you online," she said. 
But even if these chats could have positive effects, do we really want to deliver our most intimate erotic fantasies into the hands of an AI controlled by an American multinational? "Many people don't realise that the data that they enter into ChatGPT is sent to OpenAI," Devlin said. If Altman succeeds in taking over this growing industry, OpenAI would possess "without doubt the largest amount of data on people's erotic preferences", Thorne said. It's a question that users should probably keep in mind before launching into a lascivious back-and-forth with their ever-submissive sex-bot.
[26]
How sexy should AI be? OpenAI is one of many firms hoping to cash in
Sex is a big market for the AI industry. OpenAI's ChatGPT won't be the first to try to profit from it. ChatGPT will be able to have kinkier conversations after OpenAI CEO Sam Altman announced the artificial intelligence (AI) company will soon allow its chatbot to engage in "erotica" for verified adults. OpenAI won't be the first to try to profit from sexualised AI. Sexual content was a top draw for AI tools almost as soon as the boom in AI-generated imagery and words erupted in 2022. But the companies that were early to embrace mature AI also encountered legal and societal minefields and harmful abuse as a growing number of people have turned to the technology for companionship or titillation. Will a sexier ChatGPT be different? After three years of largely banning mature content, Altman said Wednesday that his company is "not the elected moral police of the world" and ready to allow "more user freedom for adults" at the same time as it sets new limits for teens. "In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here," Altman wrote on social media platform X, whose owner, Elon Musk, has also introduced an animated AI character that flirts with paid subscribers. For now, unlike Musk's Grok chatbot, paid subscriptions to ChatGPT are mostly pitched for professional use. But letting the chatbot become a friend or romantic partner could be another way for the world's most valuable startup, which is losing more money than it makes, to turn a profit that could justify its $500 billion (€427 billion) valuation. "They're not really earning much through subscriptions, so having erotic content will bring them quick money," said Zilan Qian, a fellow at Oxford University's China Policy Lab who has studied the popularity of dating-based chatbots in the United States and China.
There are already about 29 million active users of AI chatbots designed specifically for romantic or sexual bonding, and that's not counting people who use conventional chatbots in that way, according to research published by Qian earlier this month. It also doesn't include users of Character.AI, which is fighting a lawsuit that alleges a chatbot modeled after "Game of Thrones" character Daenerys Targaryen formed a sexually abusive relationship with a 14-year-old boy and pushed him to kill himself. OpenAI is also facing a lawsuit from the family of a 16-year-old ChatGPT user who died by suicide in April. Qian said she worries about the toll on real-world relationships when mainstream chatbots, already prone to sycophancy, are primed for 24-hour availability serving sexually explicit content. "ChatGPT has voice chat versions. I would expect that in the future, if they were to go down this way -- voice, text, visual -- it's all there," she said. In love with a chatbot Humans who fall in love with human-like machines have long been a literary cautionary tale, from popular science fiction of the last century to the ancient Greek legend of Pygmalion, obsessed with a woman he sculpted from ivory. Creating such a machine would seem like an unusual detour for OpenAI, founded a decade ago as a nonprofit dedicated to safely building better-than-human AI. Altman said on a podcast in August that OpenAI has tried to resist the temptation to introduce products that could "juice growth or revenue" but be "very misaligned" with its long-term mission. Asked for a specific example, he gave one: "Well, we haven't put a sexbot avatar in ChatGPT yet". US-based startup Civitai, a platform for AI-generated art, learned the hard way that making money off mature AI won't be an easy path. "When we launched the site, it was an intentional choice to allow mature content," said Justin Maier, the company's co-founder and CEO, in an interview last year. 
Backed by the prominent venture capital firm Andreessen Horowitz, which has also invested in OpenAI, the startup was one of several that tried to capitalise on the sudden popularity of tools like Stable Diffusion and Midjourney, which enabled people to type a description and conjure up almost any kind of image. Part of Stable Diffusion's initial popularity was the ease with which it could generate a new kind of synthetic and highly customised pornography. "What we had seen was that there was a lot of interest in mature content," Maier said. Training these AI systems, known as models, on "mature themes actually made it so that these models were more capable of human anatomy and resulted in actually better models," he said. "We didn't want to prevent the kind of growth that actually increased everything for the entire community, whether you were interested in mature content or Pixar," Maier said, adding that "we wanted to ultimately give the control to the user to decide what they would see on the site and what their experience would be". High risk of abuse The laissez-faire approach has also invited abuse. Civitai last year implemented new measures to detect and remove sexual images depicting children, but it remained a hub for AI-generated pornography, including fake images of celebrities. Confronting increasing pressure, including from payment processors and a new law against nonconsensual images signed by US President Donald Trump, Civitai earlier this year blocked users from creating deepfake images of real people. Engagement dropped. Another company that hasn't shied away from mature content is US-based Nomi, though its founder and CEO Alex Cardinell said its companion chatbots are "strictly" for users over 18 and were never marketed to kids. They are also not designed for sex, though Cardinell said in an interview earlier this year that people who build platonic relationships with their chatbot might find it veering into a romantic one. 
"It's kind of very user-dependent for where they're kind of missing the human gap in their life. And I think that's different for everyone," he said. He declined to guess how many Nomi users are having erotic conversations with the chatbot, comparing it to real-life partners who might do "mature content things" for some part of their lives but "all sorts of other stuff together as well". "We're not monitoring user conversations like that," Cardinell said.
[27]
ChatGPT is getting 'erotica for verified adults' in December: Sam Altman claims mental health concerns have been addressed, so now it's time to 'safely relax the restrictions in most cases'
The "glazing" personalities of the GPT-4o era are coming back too, though Altman hopes they will be better this time around. Six months after admitting that GPT-4o was a little too enthusiastic with its relentless praise for users, Sam Altman is bringing sexy back. The OpenAI boss says a new version of ChatGPT "that behaves more like what people liked about 4o" is coming in a few weeks, and it'll get even better -- or potentially much worse, depending on how you feel about the idea -- in December with the introduction of AI-powered "erotica." "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues," Altman wrote on X. "We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases." It's arguable that OpenAI has been anything but "careful" with the mental health impacts of its chatbots. The company was sued in August by the parents of a teenager who died by suicide after allegedly being encouraged and instructed on how to do so by ChatGPT. The following month, Altman said the software would be trained not to talk to teens about suicide or self-harm (possibly leading one to wonder why it took a lawsuit over a teen suicide to spark such a change), or to engage them in "flirtatious talk." At the same time, Altman said OpenAI aims to "treat our adult users like adults," and that's seemingly where this forthcoming new version comes in, as Altman repeated the phrase in today's message. "In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!)," Altman continued. 
"If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing)." And then, so to speak, the money shot: "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults." I'm generally of the opinion that adults should be allowed to do what they want as long as nobody's being hurt, but in this case I have to admit to certain concerns. Because as long as nobody's being hurt is, for the moment at least, the big question here: "AI psychosis," in which people form obsessive or otherwise unhealthy connections to chatbots, or even come to believe the software is actually sentient, is not a clinical designation but it does seem to be a growing problem, to the point that Altman himself recently acknowledged that some people use it "in self-destructive ways." In one particularly disturbing incident reported by Reuters, a cognitively impaired man died while attempting to meet what he believed was a real woman, but was in fact a Meta chatbot he'd been talking to on Facebook. Altman also said in the recent past, somewhat ironically as it turns out, that while some AI companies will opt to make "Japanese anime sex bots" -- presumably a dig at Elon Musk -- OpenAI would not, in part to avoid the risk that "people who have really fragile mental states get exploited accidentally." So there has been explicit acknowledgement of the potential risk of misuse or overuse of chatbots, and in light of that -- and more generally, the fact that this technology is still in its infancy -- I do wonder about the wisdom of turning them into always-on phone sex machines. (You can call it "erotica" if you like, but it is what it is.) On the other hand, OpenAI needs money -- lots and lots and lots of money -- and nobody ever went broke selling sex.
[28]
Sam Altman promises "new version of ChatGPT" that behaves more like GPT‑4o -- plus a new adult mode
Verified adult users will be able to access erotica and other mature content through a new "adult mode." OpenAI CEO Sam Altman has announced that ChatGPT is about to be more like its old self, and, depending on your settings, a bit more risqué. Altman announced on X that the company plans to roll out a "new version of ChatGPT" that behaves more like an older version of ChatGPT, specifically the GPT-4o model that many people pined for when GPT-5 first replaced it. That includes offering options for bigger, friendlier, and more emoji-filled personalities, as well as a new "mature content" mode for verified adult users. The announcement comes amid rising tensions between user demand for both more flexible AI and more enforced safety and ethical standards. OpenAI now believes it has developed better safeguards around mental health and is ready to ease some of the restrictions. The incoming update aims to strike a new balance, Altman explained. If you've ever found yourself thinking your AI assistant sounds more like corporate HR than a conversational partner, you're going to be thrilled. OpenAI's plan to begin allowing erotica generation for adult users who verify their age is a real bombshell, however. It's a policy reversal that will almost certainly be welcomed by some, while viewed with deep suspicion by critics who fear AI's potential for abuse. Currently, ChatGPT's content policy disallows erotica and depictions of extreme violence except in educational, medical, or historical contexts. Even within storytelling, users often hit moderation roadblocks. In December, OpenAI said it will complete its rollout of age-gating tools and flip the switch on "mature mode" for verified adults. Even users who have no interest in erotica might notice a difference in how ChatGPT behaves. The current default often feels intentionally bland. The shift to a more GPT-4o approach to personality might be a big deal to a lot of users. It will be an opt-in choice, though. 
If you want a more strait-laced AI, you can have that instead. It's quite a philosophical change for OpenAI compared to how the company was talking even a few weeks ago. That's when OpenAI showcased how ChatGPT would shunt young-seeming users into a model for teens with stricter filters and parent alerts. This tension, between freedom and safety, creativity and control, has haunted every generation of AI tools. But the shift happening now suggests OpenAI is ready to stop chasing a one-size-fits-all solution. Instead, it's inching toward customization: a ChatGPT that can be sensitive or sassy, PG or R-rated, depending on who's using it and how. This update is also a quiet pivot in how OpenAI frames ChatGPT's purpose. It was once primarily pitched for productivity, but now it's keen on seeming like a companion for every occasion. Still, a more expressive, more permissive, more human ChatGPT might not expand the customer base for OpenAI as it faces stiffer competition from Google, Meta, and other AI developers.
[29]
Sam Altman: OpenAI will allow erotica for adult users
Why it matters: The move could boost OpenAI as it seeks to sign up consumers for paid subscriptions, but is also likely to increase pressure on lawmakers to enact meaningful regulations. Driving the news: Altman said in an X post that OpenAI plans to allow ChatGPT to have more personality and fulfill previously blocked requests. * "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," Altman said. * "As part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," he added, though that would come around December. The intrigue: The move comes as lawmakers are eyeing tighter restrictions on chatbot companions. * As first reported by Axios, Sen. Josh Hawley (R-Mo.) is circulating a draft bill that would ban AI companions for minors. What they're saying: "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues," Altman said in the post. "We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right." * OpenAI was not immediately available for comment. The big picture: Historically, porn has helped drive the frontier of technology, from VCRs to digital video and interactive gaming. * Although adult entertainment hasn't been the driving force behind the leading AI efforts, there is already a significant cottage industry of AI-generated porn. * Elon Musk's Grok has been the most permissive of the large players, notably offering AI companions designed for sexual role play. * Meta has also come under fire after it was revealed that its chatbots -- including some that mimic specific celebrities -- were engaging in sexual banter with minors. What to watch: Altman said that in the coming weeks, people will be able to better dictate the tone and personality of ChatGPT.
[30]
Sex could become the next big business opportunity for AI companies
ChatGPT will be able to have kinkier conversations after OpenAI CEO Sam Altman announced the artificial intelligence company will soon allow its chatbot to engage in "erotica for verified adults." OpenAI won't be the first to try to profit from sexualized AI. Sexual content was a top draw for AI tools almost as soon as the boom in AI-generated imagery and words erupted in 2022. But the companies that were early to embrace mature AI also encountered legal and societal minefields and harmful abuse as a growing number of people have turned to the technology for companionship or titillation. Will a sexier ChatGPT be different? After three years of largely banning mature content, Altman said Wednesday that his company is "not the elected moral police of the world" and ready to allow "more user freedom for adults" at the same time as it sets new limits for teens.
[31]
Sam Altman says ChatGPT will become friendly again with a touch of erotica for adult users - SiliconANGLE
Sam Altman says ChatGPT will become friendly again with a touch of erotica for adult users OpenAI Chief Executive Sam Altman today said restrictions on the company's ChatGPT chatbot will be eased, with the incoming updated version to "treat adult users like adults" and give users the option to interact with an erotic-themed bot. The new update lets users shape the chatbot's personality, giving it a more natural, human feel. This change follows user feedback that ChatGPT had become less expressive and lost some of its original charm. "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues," Altman wrote on X. "We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right." The company is currently being sued in a wrongful death lawsuit by the parents of a teenager who took his own life earlier in the year after ChatGPT had offered some questionable advice; the chatbot, it seems, had become almost too human-sounding for the young man. While companies push to create human-sounding AI, experts are warning that such technology can exacerbate mental health problems in the user. This was one of the reasons California Governor Gavin Newsom introduced a slew of regulations this week around artificial intelligence chatbots. Altman says OpenAI has been able to "mitigate the serious mental health issues" and has "new tools" which have enabled the company to relax restrictions. The updated bot will be more similar to ChatGPT version 4o, which it seems people preferred to ChatGPT 5. The new bot will act in a more "human-like way, or use a ton of emoji, or act like a friend," but only if users ask for it. Altman added that this isn't a case of "usage-maxxing" - optimizing for engagement. This will include the erotic version of the bot - for adult users - although Altman didn't go into details. 
Elon Musk's Grok chatbot already has a similar function, with users able to have flirtatious and sexually explicit conversations with the xAI bot.
[32]
Dating chatbot expert: ChatGPT subscriptions aren't 'really earning much so having erotic content will bring them quick money' | Fortune
ChatGPT will be able to have kinkier conversations after OpenAI CEO Sam Altman announced the artificial intelligence company will soon allow its chatbot to engage in "erotica for verified adults." OpenAI won't be the first to try to profit from sexualized AI. Sexual content was a top draw for AI tools almost as soon as the boom in AI-generated imagery and words erupted in 2022. But the companies that were early to embrace mature AI also encountered legal and societal minefields and harmful abuse as a growing number of people have turned to the technology for companionship or titillation. Will a sexier ChatGPT be different? After three years of largely banning mature content, Altman said Wednesday that his company is "not the elected moral police of the world" and ready to allow "more user freedom for adults" at the same time as it sets new limits for teens. "In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here," Altman wrote on social media platform X, whose owner, Elon Musk, has also introduced an animated AI character that flirts with paid subscribers. For now, unlike Musk's Grok chatbot, paid subscriptions to ChatGPT are mostly pitched for professional use. But letting the chatbot become a friend or romantic partner could be another way for the world's most valuable startup, which is losing more money than it makes, to turn a profit that could justify its $500 billion valuation. "They're not really earning much through subscriptions so having erotic content will bring them quick money," said Zilan Qian, a fellow at Oxford University's China Policy Lab who has studied the popularity of dating-based chatbots in the U.S. and China. There are already about 29 million active users of AI chatbots designed specifically for romantic or sexual bonding, and that's not counting people who use conventional chatbots in that way, according to research published by Qian earlier this month. 
It also doesn't include users of Character.AI, which is fighting a lawsuit that alleges a chatbot modeled after "Game of Thrones" character Daenerys Targaryen formed a sexually abusive relationship with a 14-year-old boy and pushed him to kill himself. OpenAI is facing a lawsuit from the family of a 16-year-old ChatGPT user who died by suicide in April. Qian said she worries about the toll on real-world relationships when mainstream chatbots, already prone to sycophancy, are primed for 24-hour availability serving sexually explicit content. "ChatGPT has voice chat versions. I would expect that in the future, if they were to go down this way -- voice, text, visual -- it's all there," she said. Humans who fall in love with human-like machines have long been a literary cautionary tale, from popular science fiction of the last century to the ancient Greek legend of Pygmalion, obsessed with a woman he sculpted from ivory. Creating such a machine would seem like an unusual detour for OpenAI, founded a decade ago as a nonprofit dedicated to safely building better-than-human AI. Altman said on a podcast in August that OpenAI has tried to resist the temptation to introduce products that could "juice growth or revenue" but be "very misaligned" with its long-term mission. Asked for a specific example, he gave one: "Well, we haven't put a sexbot avatar in ChatGPT yet." Idaho-based startup Civitai, a platform for AI-generated art, learned the hard way that making money off mature AI won't be an easy path. "When we launched the site, it was an intentional choice to allow mature content," said Justin Maier, the company's co-founder and CEO, in an interview last year. Backed by the prominent venture capital firm Andreessen Horowitz, which has also invested in OpenAI, the Idaho startup was one of several that tried to capitalize on the sudden popularity of tools like Stable Diffusion and Midjourney that enabled people to type a description and conjure up almost any kind of image. 
Part of Stable Diffusion's initial popularity was the ease with which it could generate a new kind of synthetic and highly customized pornography. "What we had seen was that there was a lot of interest in mature content," Maier said. Training these AI systems, known as models, on "mature themes actually made it so that these models were more capable of human anatomy and resulted in actually better models," he said. "We didn't want to prevent the kind of growth that actually increased everything for the entire community, whether you were interested in mature content or Pixar," Maier said. "So we allowed it early on and have always kind of had this battle of making it so that we can keep things filtered and safe, if that's not what you're interested in. We wanted to ultimately give the control to the user to decide what they would see on the site and what their experience would be." That also invited abuse. Civitai last year implemented new measures to detect and remove sexual images depicting children, but it remained a hub for AI-generated pornography, including fake images of celebrities. Confronting increasing pressure, including from credit card processors and a new law against nonconsensual images signed by President Donald Trump, Civitai earlier this year blocked users from creating deepfake images of real people. Engagement dropped. Another company that hasn't shied away from mature content is Baltimore-based Nomi.ai, though its founder and CEO Alex Cardinell said its companion chatbots are "strictly" for users over 18 and were never marketed to kids. They are also not designed for sex, though Cardinell said in an interview earlier this year that people who build platonic relationships with their chatbot might find it veering into a romantic one. "It's kind of very user-dependent for where they're kind of missing the human gap in their life. And I think that's different for everyone," he said. 
He declined to guess how many Nomi users are having erotic conversations with the chatbot, comparing it to real-life partners who might do "mature content things" for some part of their lives but "all sorts of other stuff together as well." "We're not monitoring user conversations like that," Cardinell said. Altman's announcement that erotica for adults could arrive on ChatGPT in December came a day after California Gov. Gavin Newsom vetoed legislation that would have banned companies from making AI chatbots available to anyone under 18 years old if it was "foreseeable" that they would engage in "erotic or sexually explicit interactions" with kids or encourage them to harm themselves. The tech industry lobbied heavily against the bill, which Newsom said was too broad, but OpenAI, Meta and others introduced new age restrictions and parental controls for AI-teen interactions.
[33]
Two Months Ago, Sam Altman Was Boasting That OpenAI Didn't Have to Do Sexbots. Now It's Doing Sexbots
AI companies are losing billions of dollars, and investors will likely have to wait for years to see a return -- if ever. In the scramble for profitability, OpenAI opened the door to adult chats, announcing at its DevDay 2025 conference earlier this month that it would soon allow "mature apps." "Support for mature (18+) experiences will arrive once appropriate age verification and controls are in place," the company's developer guidelines read. "As part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman tweeted on Tuesday, confirming once and for all that the ChatGPT maker would double down on sexbots. It's a remarkable change in the billionaire's tune. Just two months ago, Altman boasted in an interview with YouTuber and science communicator Cleo Abram that the company hadn't "put a sexbot avatar in ChatGPT yet." The broadside was seemingly aimed at competitor Musk, whose AI startup xAI introduced a lingerie- and corset-wearing anime girl avatar in July -- including a "full gooner mode." "That does seem like it would get time spent," Abram replied. "Apparently, it does," Altman added. Apart from serving as a handy reminder that we should never take tech executives at their word -- both Altman and Musk are well known for chronically overpromising and underdelivering -- the about-face also highlights how even some of the biggest players in the AI space are leaving no stone unturned in an effort to boost engagement numbers. OpenAI's latest change in policy shouldn't come as much of a surprise. The earliest days of modern, natural language model-based chatbots are filled with avatars designed to have sexually charged conversations. Erotic roleplay has played a key role in the early dissemination of the tech. How OpenAI will cope with the potential of exploitation and spread of abusive material remains to be seen. 
Last month, twelve current and former xAI workers told Business Insider that they regularly encountered sexually explicit material, including AI-generated material involving the sexual abuse of children. Alongside "treating adult users like adults," OpenAI rolled out what it says is an age-appropriate ChatGPT experience for users under the age of 18, while also promising to come up with tech that determines a user's age based on how they behave and interact with the chatbot. How effective these guardrails will be remains anybody's guess. We've already seen other AI companies struggling to successfully implement age-based restrictions. Case in point, embattled startup Character.AI attempted to roll out parental controls, which turned out to be comically easy for underage users to bypass. The use of the tech by children and teens in particular has become a major talking point as of late, with Republican lawmaker Josh Hawley circulating a draft bill this week that would ban AI companions for minors. The move was inspired by Congressional hearings involving parents whose children harmed themselves and even took their own lives after talking to AI chatbots. It's not hard to see OpenAI's motivation for turning up the sexually charged content dial. As the industry continues to grapple with astronomical losses -- despite soaring valuations -- AI companies are keeping many ways of monetizing the tech on the table.
[34]
'Treat adult users like adults': ChatGPT to write erotica
ChatGPT will soon write erotica for verified adults, according to OpenAI's chief executive, as well as becoming more "human-like". As part of the company's policy to "treat adult users like adults", the chatbot will be able to create sexual content once age verification is fully rolled out across the tool. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," said Sam Altman in a post on X. The announcement wasn't popular with everyone. One X user asked Mr Altman: "Why do age-gates always have to lead to erotica? Like, I just want to be able to be treated like an adult and not a toddler, that doesn't mean I want perv-mode activated." "You won't get it unless you ask for it," he responded. According to the announcement, ChatGPT had become more restrictive and "less useful/enjoyable to many users who had no mental health problems" while the company tackled problems concerning the chatbot and vulnerable users. "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues," Mr Altman said. "Given the seriousness of the issue we wanted to get this right." In August, the family of teenager Adam Raine began suing OpenAI over his death. It was the first time the company had faced a wrongful death lawsuit. Adam's parents accused Sam Altman of putting profit over safety after ChatGPT instructed their son on how to end his life, and even offered to write a suicide note for him. At the time, OpenAI told Sky News it learned its safeguards "can sometimes become less reliable in long interactions where parts of the model's safety training may degrade" and said it would continually improve those safeguards. "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," said Mr Altman on Tuesday evening. 
"In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!)." The latest ChatGPT model, GPT-5, has faced criticism from users for being less playful and creative than its predecessor. Now, OpenAI will allow GPT-5 to "respond in a very human-like way, or use a ton of emoji, or act like a friend" if users want that option. In response to Mr Altman's post, one X user said: "About time... ChatGPT used to feel like a person you could actually talk to, then it turned into a compliance bot. "If it can be made fun again without losing the guardrails, that's a huge win. People don't want chaos, just authenticity." Mr Altman responded: "For sure; we want that too. "Almost all users can use ChatGPT however they'd like without negative effects; for a very small percentage of users in mentally fragile states there can be serious problems. "0.1% of a billion users is still a million people."
[35]
Naughty or Nice? ChatGPT to Allow Erotica in December, Says Sam Altman - Decrypt
OpenAI will allow ChatGPT to generate erotic content for verified adults starting in December, marking a dramatic shift from the company's historically restrictive approach to sexual content. CEO Sam Altman announced the change Tuesday, saying the company would follow a "treat adult users like adults" principle once comprehensive age verification rolls out. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman wrote on X. The move addresses years of user frustration with ChatGPT's content filters. Since its launch, the chatbot has eschewed sexual content, often blocking even clinical discussions or creative writing that mentioned intimacy. Users attempting to generate romance novel passages or explore mature themes were met with warnings about policy violations, leading many to seek alternative AI models. "Everything is rejected as 'too political' [or] 'too sexual' (Literally just questions about kissing and non-sexual physical intimacy)," one frustrated user wrote on OpenAI's Jailbreak subreddit recently, describing how ChatGPT shut down even metaphorical depictions of explicit scenes despite updated usage policies that theoretically allowed such content. Multiple forum threads document similar complaints, with users reporting that filters tightened again about a month ago after briefly easing earlier this year. Apparently, the censorship issue was significant enough for ChatGPT users to launch a Change.org petition a few weeks ago, asking OpenAI to set up an "Adult Mode" to let them chat with less restrictions. The letter got over 3,000 signatures in less than a week. The December update would make OpenAI a latecomer to a market where competitors are already starting to embrace adult content. 
Elon Musk's xAI has positioned its Grok chatbot as the most uncensored option available among the top companies, with minimal content restrictions compared to other major AI companies. Grok even offers "waifu companions" with increasingly not-safe-for-work interactions the more time you spend with them. Character.AI, despite facing lawsuits over teen safety, has long allowed romantic and flirtatious interactions with AI characters. For users specifically seeking to generate erotica, open-source models remain the best option. Models like Llama and various uncensored variants available through platforms like Hugging Face offer unrestricted content generation that runs locally on user hardware, avoiding corporate content policies entirely. These tools have built dedicated communities of users creating adult fiction and role-playing scenarios without platform oversight. That said, OpenAI's entry into this space could shift that dynamic. ChatGPT's underlying models represent state-of-the-art language generation, with capabilities that typically surpass open-source alternatives in coherence, creativity, and following complex instructions. For users who want both high-quality output and adult content, December's policy change would eliminate the need to choose between the two. Altman framed the content policy relaxation as part of the company's broader safety effort. By separating adult and teen experiences through age verification, OpenAI believes it can offer more freedom to adults while protecting vulnerable younger users. Within weeks, the company will also release personality customization options, allowing users to make ChatGPT more conversational and human-like.
[36]
Sex is a big market for the AI industry. ChatGPT won't be the first to try to profit from it
ChatGPT will be able to have kinkier conversations after OpenAI CEO Sam Altman announced the artificial intelligence company will soon allow its chatbot to engage in "erotica for verified adults." OpenAI won't be the first to try to profit from sexualized AI. Sexual content was a top draw for AI tools almost as soon as the boom in AI-generated imagery and words erupted in 2022. But the companies that were early to embrace mature AI also encountered legal and societal minefields and harmful abuse as a growing number of people have turned to the technology for companionship or titillation. Will a sexier ChatGPT be different? After three years of largely banning mature content, Altman said Wednesday that his company is "not the elected moral police of the world" and ready to allow "more user freedom for adults" at the same time as it sets new limits for teens. "In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here," Altman wrote on social media platform X, whose owner, Elon Musk, has also introduced an animated AI character that flirts with paid subscribers. For now, unlike Musk's Grok chatbot, paid subscriptions to ChatGPT are mostly pitched for professional use. But letting the chatbot become a friend or romantic partner could be another way for the world's most valuable startup, which is losing more money than it makes, to turn a profit that could justify its $500 billion valuation. "They're not really earning much through subscriptions so having erotic content will bring them quick money," said Zilan Qian, a fellow at Oxford University's China Policy Lab who has studied the popularity of dating-based chatbots in the U.S. and China. There are already about 29 million active users of AI chatbots designed specifically for romantic or sexual bonding, and that's not counting people who use conventional chatbots in that way, according to research published by Qian earlier this month. 
It also doesn't include users of Character.AI, which is fighting a lawsuit that alleges a chatbot modeled after "Game of Thrones" character Daenerys Targaryen formed a sexually abusive relationship with a 14-year-old boy and pushed him to kill himself. OpenAI is facing a lawsuit from the family of a 16-year-old ChatGPT user who died by suicide in April. Qian said she worries about the toll on real-world relationships when mainstream chatbots, already prone to sycophancy, are primed for 24-hour availability serving sexually explicit content. "ChatGPT has voice chat versions. I would expect that in the future, if they were to go down this way -- voice, text, visual -- it's all there," she said. Humans who fall in love with human-like machines have long been a literary cautionary tale, from popular science fiction of the last century to the ancient Greek legend of Pygmalion, obsessed with a woman he sculpted from ivory. Creating such a machine would seem like an unusual detour for OpenAI, founded a decade ago as a nonprofit dedicated to safely building better-than-human AI. Altman said on a podcast in August that OpenAI has tried to resist the temptation to introduce products that could "juice growth or revenue" but be "very misaligned" with its long-term mission. Asked for a specific example, he gave one: "Well, we haven't put a sexbot avatar in ChatGPT yet." Idaho-based startup Civitai, a platform for AI-generated art, learned the hard way that making money off mature AI won't be an easy path. "When we launched the site, it was an intentional choice to allow mature content," said Justin Maier, the company's co-founder and CEO, in an interview last year. Backed by the prominent venture capital firm Andreessen Horowitz, which has also invested in OpenAI, the Idaho startup was one of several that tried to capitalize on the sudden popularity of tools like Stable Diffusion and Midjourney that enabled people to type a description and conjure up almost any kind of image. 
Part of Stable Diffusion's initial popularity was the ease with which it could generate a new kind of synthetic and highly customized pornography. "What we had seen was that there was a lot of interest in mature content," Maier said. Training these AI systems, known as models, on "mature themes actually made it so that these models were more capable of human anatomy and resulted in actually better models," he said. "We didn't want to prevent the kind of growth that actually increased everything for the entire community, whether you were interested in mature content or Pixar," Maier said. "So we allowed it early on and have always kind of had this battle of making it so that we can keep things filtered and safe, if that's not what you're interested in. We wanted to ultimately give the control to the user to decide what they would see on the site and what their experience would be." That also invited abuse. Civitai last year implemented new measures to detect and remove sexual images depicting children, but it remained a hub for AI-generated pornography, including fake images of celebrities. Confronting increasing pressure, including from payment processors and a new law against nonconsensual images signed by President Donald Trump, Civitai earlier this year blocked users from creating deepfake images of real people. Engagement dropped. Another company that hasn't shied away from mature content is Baltimore-based Nomi, though its founder and CEO Alex Cardinell said its companion chatbots are "strictly" for users over 18 and were never marketed to kids. They are also not designed for sex, though Cardinell said in an interview earlier this year that people who build platonic relationships with their chatbot might find it veering into a romantic one. "It's kind of very user-dependent for where they're kind of missing the human gap in their life. And I think that's different for everyone," he said. 
He declined to guess how many Nomi users are having erotic conversations with the chatbot, comparing it to real-life partners who might do "mature content things" for some part of their lives but "all sorts of other stuff together as well." "We're not monitoring user conversations like that," Cardinell said.
[37]
ChatGPT will soon allow adults to use AI chatbot for 'erotica'
OpenAI CEO Sam Altman said that verified adults will be able to use ChatGPT for 'erotica' by the end of the year. Adults will soon be able to use a less censored version of ChatGPT that allows them to create erotic materials, according to the company's chief. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Sam Altman, chief executive of ChatGPT maker OpenAI, said on social media platform X on Tuesday. The updates will give verified adult users more choices about how the artificial intelligence (AI) chatbot gives answers, whether it talks like a friend or uses a "ton of emojis," he said. It's not immediately clear what the explicit or erotic content could entail. Altman said ChatGPT was initially "pretty restrictive" to avoid exacerbating mental health issues, even though it made the platform "less enjoyable" for people who aren't at risk. The company claims it has found a way to keep the platform safe for users with mental health issues and can now grant more flexibility for others. "Adults that are not at risk of serious harm (mental health breakdowns, suicide, etc) should have a great deal of freedom in how they use ChatGPT," Altman wrote. The company has faced scrutiny over its interactions with users, particularly young people. Last month, OpenAI introduced parental controls that allow parents to link their accounts to their children's profiles and customise the responses they receive from the chatbot. OpenAI's move to allow erotica in ChatGPT is an apparent shift from Altman's previous stance. He said in August that he had chosen not to put a "sex bot avatar" into his chatbot, which he said was good for the world but not necessarily for winning the AI race.
[39]
ChatGPT Gets Adults-Only Content as OpenAI Loosens Restrictions - Phandroid
OpenAI CEO Sam Altman announced a significant policy shift that will allow ChatGPT to generate erotic content for verified adult users by December 2025. This move signals OpenAI's willingness to loosen traditionally strict content restrictions in response to user demand and competitive pressure. The decision to allow adult content in ChatGPT follows years of strict policies that restricted NSFW material. Altman explained that while these restrictions were initially necessary to protect users, particularly younger ones, they also made the chatbot less useful for many adults. "We want to treat adult users like adults," Altman said, noting that new safety tools give the company confidence to relax restrictions in most cases. This policy change arrives amid rising competition from platforms like Elon Musk's xAI, which already supports NSFW chatbot companions. Additionally, OpenAI is rolling out features allowing users to customize ChatGPT's personality and tone, transforming the AI from a purely informational tool into a more versatile companion. The company has also launched a ChatGPT-powered search engine to compete with Google. However, the relaxed content policy comes with comprehensive age verification measures. OpenAI has established an expert council focused on AI and well-being to guide future safety policies, especially following scrutiny from regulators like the FTC and legal challenges tied to mental health concerns. This balance between expanding capabilities and maintaining safety reflects OpenAI's effort to attract more paying subscribers while managing ongoing ethical challenges.
[40]
OpenAI CEO Sam Altman says ChatGPT will soon allow 'erotica' for adult users
OpenAI CEO Sam Altman said Oct. 15 that his company's decision to roll out age-gated features on its chatbot ChatGPT "blew up on the erotica point" more than he had intended. Altman previously announced Oct. 14 in a post on X that the company will "relax" some restrictions on the artificial intelligence chatbot and release a version that "behaves more like what people liked about 4o." OpenAI will allow mature content for ChatGPT users who verify their age on the platform starting in December, Altman said, after the chatbot was made restrictive for users in mental distress. "As part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman wrote in the Oct. 14 post. But after receiving backlash over the announcement, Altman said Oct. 15 in another X post that the company is not rolling back any restrictions related to "mental health" and defended the company's "treat adult users like adults" principle. "As AI becomes more important in people's lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission," Altman wrote on Oct. 15. "But we are not the elected moral police of the world." He added that ChatGPT users experiencing mental health crises will be treated "very different" and the chatbot would not be allowed to create "things that cause harm to others." Parents suing OpenAI decry safeguards as 'trying to change the subject' The rolled back restrictions were part of an update introduced after a lawsuit filed in August against OpenAI by a family that claimed the chatbot had encouraged their son to take his own life, according to the BBC. 
The British broadcaster reported that Matt and Maria Raine, parents of 16-year-old Adam Raine, filed the suit in California, arguing that the program validated his "most harmful and self-destructive thoughts." On Aug. 26, OpenAI published a note that said there have been moments where "our systems did not behave as intended in sensitive situations." The company then published another note on Sept. 2 outlining restrictions the company would place on ChatGPT. Jay Edelson, a lawyer representing the family, told the BBC that the restriction announcement was "OpenAI's crisis management team trying to change the subject" and called for ChatGPT to be taken down. "Rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better," Edelson said. In the Oct. 14 social media post, Altman said that "we have been able to mitigate the serious mental health issues and have new tools." In the Oct. 15 post, he said minors "need significant protection" as artificial intelligence technology permeates society. USA TODAY reached out to Edelson for comment on the rollback of some of the restrictions but did not receive a response.
[41]
Sam Altman Just Made Some Spicy Policy Changes for Adult ChatGPT Users
OpenAI co-founder and CEO Sam Altman said that the company is planning to "safely relax" restrictions on what kinds of conversations ChatGPT can engage in, and by the end of the year will even allow adult users to have sexually explicit conversations with the AI system. In a post on X on Tuesday, Altman wrote that "we made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right." These restrictions were instituted after parents of children who committed suicide began to accuse ChatGPT of contributing to their children's mental health crises or even helping to plan suicides. The parents of Adam Raine, a 16-year-old who committed suicide, have even sued OpenAI in an effort to compel the company to change its safety policies. In a September blog post titled "Teen safety, freedom, and privacy," Altman wrote that OpenAI would restrict teenage ChatGPT users from engaging in any discussions about suicide or self-harm. An earlier post, released in August, stated that OpenAI would strengthen its safeguards and content-blocking classifiers to prevent conversations that shouldn't be allowed (such as helping someone to self-harm). If a user expresses suicidal intent, OpenAI said, ChatGPT should direct people to the suicide hotline, which is 988.
[42]
Altman says ChatGPT will soon allow erotica for adult users
OpenAI CEO Sam Altman on Tuesday said ChatGPT will soon allow erotica for adult users. "In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!)," Altman said in a Tuesday post on the social platform X. "If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing). In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," he added. In August, OpenAI unveiled its new version of ChatGPT, GPT-5, with Altman calling it "a major upgrade" from the previous version. "Thirty-two months ago, we launched ChatGPT. And since then, it has become the default way that people use AI. In that first week, a million people tried it out, and we thought that was pretty incredible," Altman said in a livestream. "But now, about 700 million people use ChatGPT every week, and increasingly rely on it to work, to learn, for advice, to create and much more. Today, finally, we're launching GPT-5. GPT-5 is a major upgrade over GPT-4," he continued. According to OpenAI, the new version of ChatGPT would see advancement in areas such as coding, writing and health. OpenAI also said in August it would offer ChatGPT to federal agencies for $1 annually in a new partnership with the General Services Administration (GSA). "One of the best ways to make sure AI works for everyone is to put it in the hands of the people serving the country," Altman said in a statement at the time.
[43]
Sam Altman says OpenAI isn't the 'moral police of the world' after people slam its decision to offer erotica to adult users | Fortune
After OpenAI said it would allow erotic content through ChatGPT, Sam Altman struck a defiant tone in defending the decision. The OpenAI CEO said in a post on X that the AI company is "not the elected moral police of the world" in response to criticism from the likes of billionaire Mark Cuban and the National Center on Sexual Exploitation. Altman clarified in the post that just like with R-rated films, the company is taking the necessary precautions to protect younger users while still allowing leeway for adults. "As AI becomes more important in people's lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission," he wrote. He noted OpenAI will still not allow content that harms others and will respond appropriately to users having mental-health crises, and allow users to achieve their long-term goals, "without being paternalistic." A new version of ChatGPT to be rolled out in December will also act more like a friend and very "human-like" if that's what a user wants, Altman said this week. The company's pivot on erotic material stands in contrast to Altman's comments in an August podcast, when he said he was proud of OpenAI's ability to resist temptations like adding a "sex bot avatar" to ChatGPT. "There's a lot of short-term stuff we could do that would, like, really juice growth or revenue or whatever and be very misaligned with that long-term goal. And I'm proud of the company and how little we get distracted by that," Altman said to video journalist Cleo Abram. "But sometimes we do get tempted," Altman added. A September study of 1,012 U.S. adults, conducted by Vantage Point Counseling Services, found that nearly one in three said they have had at least one intimate or romantic relationship with an AI chatbot. At times, these human-AI relationships have been shown to go astray, especially in people with mental health challenges. 
In August, the parents of a 16-year-old sued Altman and OpenAI after their son died by suicide, having allegedly discussed methods of suicide with ChatGPT. OpenAI is working to make ChatGPT more supportive in times of crises by making it easier for some users in crisis to connect with emergency services and trusted resources, OpenAI said in a statement to CNBC. OpenAI did not immediately respond to Fortune's request for comment. Criticism about the move to allow erotic content on the LLM poured in following the decision this week, including from Shark Tank star and businessman Mark Cuban. Cuban wrote in a post on X Wednesday that the move will hurt OpenAI's business as parents prevent their children from using the chatbot. "I don't see how OpenAI can age gate successfully enough," Cuban wrote. "I'm also not sure that it can't psychologically damage young adults. We just don't know yet how addictive LLMs can be."
[44]
OpenAI's ChatGPT Is Getting Personal, Playful, and Adult-Only
None of these features will be available to those under the age of 18. OpenAI is now focusing on creating a ChatGPT experience for adults. On Tuesday, company CEO Sam Altman stated that with safeguards in place for minors and those with mental health issues, the team was now working on relaxing the baseline restrictions on the platform. This means that adult users will be able to get more use cases out of the chatbot, with the biggest among them being a version that lets the artificial intelligence (AI) chatbot have a personality. Altman also teased "erotica for verified adults." ChatGPT to Cater to Adults With Personalities and Erotica In a post on X (formerly known as Twitter), the OpenAI Co-Founder and CEO shared new ChatGPT features that are currently in development. These new features are aimed at adults who want to be able to do more with the chatbot. Altman first hinted at these features in September with the "treat adult users as adults" principle. Essentially, what this means is that if a healthy adult wants ChatGPT to act like a flirtatious partner or wants it to play out an erotic roleplay, the chatbot will go along with the requests instead of refusing them outright. Altman highlighted that relaxing the safety tools to bring these features for adults was only possible due to the new guardrails for users under the age of 18, which include stricter refusal rates, parental controls, and an AI-based age estimation system. The biggest highlight from the announcement is that the company is now working on bringing back the GPT-4o-like "personality" that was close to how a human responds. Notably, when OpenAI attempted to retire the model after releasing GPT-5, many users protested to the point that the company was forced to reinstate it. Explaining what a personality for ChatGPT would look like, Altman said, "If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it." 
He also highlighted that these elements are opt-in, meaning users must request them to receive these responses. So, if someone prefers standard responses, they can simply continue as they are. Finally, Altman revealed that in December, the company will roll out a new feature that will let ChatGPT participate in erotica-like conversations. However, this will only be available for verified adults. OpenAI did not detail what these verification processes would be.
[45]
Sex is a big market for the AI industry; ChatGPT won't be the first to tap it
OpenAI won't be the first to try to profit from sexualised AI. Sexual content was a top draw for AI tools almost as soon as the boom in AI-generated imagery and words erupted in 2022. After three years of largely banning mature content, Altman said Wednesday that his company is "not the elected moral police of the world" and ready to allow "more user freedom for adults" at the same time as it sets new limits for teens. ChatGPT will be able to have kinkier conversations after OpenAI CEO Sam Altman announced the artificial intelligence company will soon allow its chatbot to engage in "erotica for verified adults." OpenAI won't be the first to try to profit from sexualised AI. Sexual content was a top draw for AI tools almost as soon as the boom in AI-generated imagery and words erupted in 2022. But the companies that were early to embrace mature AI also encountered legal and societal minefields and harmful abuse as a growing number of people have turned to the technology for companionship or titillation. Will a sexier ChatGPT be different? After three years of largely banning mature content, Altman said Wednesday that his company is "not the elected moral police of the world" and ready to allow "more user freedom for adults" at the same time as it sets new limits for teens. "In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here," Altman wrote on social media platform X, whose owner, Elon Musk, has also introduced an animated AI character that flirts with paid subscribers. For now, unlike Musk's Grok chatbot, paid subscriptions to ChatGPT are mostly pitched for professional use. But letting the chatbot become a friend or romantic partner could be another way for the world's most valuable startup, which is losing more money than it makes, to turn a profit that could justify its $500 billion valuation. 
"They're not really earning much through subscriptions so having erotic content will bring them quick money," said Zilan Qian, a fellow at Oxford University's China Policy Lab who has studied the popularity of dating-based chatbots in the US and China. There are already about 29 million active users of AI chatbots designed specifically for romantic or sexual bonding, and that's not counting people who use conventional chatbots in that way, according to research published by Qian earlier this month. It also doesn't include users of Character.AI, which is fighting a lawsuit that alleges a chatbot modeled after "Game of Thrones" character Daenerys Targaryen formed a sexually abusive relationship with a 14-year-old boy and pushed him to kill himself. OpenAI is facing a lawsuit from the family of a 16-year-old ChatGPT user who died by suicide in April. Qian said she worries about the toll on real-world relationships when mainstream chatbots, already prone to sycophancy, are primed for 24-hour availability serving sexually explicit content. "ChatGPT has voice chat versions. I would expect that in the future, if they were to go down this way - voice, text, visual - it's all there," she said. Humans who fall in love with human-like machines have long been a literary cautionary tale, from popular science fiction of the last century to the ancient Greek legend of Pygmalion, obsessed with a woman he sculpted from ivory. Creating such a machine would seem like an unusual detour for OpenAI, founded a decade ago as a nonprofit dedicated to safely building better-than-human AI. Altman said on a podcast in August that OpenAI has tried to resist the temptation to introduce products that could "juice growth or revenue" but be "very misaligned" with its long-term mission. Asked for a specific example, he gave one: "Well, we haven't put a sexbot avatar in ChatGPT yet." 
Idaho-based startup Civitai, a platform for AI-generated art, learned the hard way that making money off mature AI won't be an easy path. "When we launched the site, it was an intentional choice to allow mature content," said Justin Maier, the company's co-founder and CEO, in an interview last year. Backed by the prominent venture capital firm Andreessen Horowitz, which has also invested in OpenAI, the Idaho startup was one of several that tried to capitalize on the sudden popularity of tools like Stable Diffusion and Midjourney that enabled people to type a description and conjure up almost any kind of image. Part of Stable Diffusion's initial popularity was the ease with which it could generate a new kind of synthetic and highly customized pornography. "What we had seen was that there was a lot of interest in mature content," Maier said. Training these AI systems, known as models, on "mature themes actually made it so that these models were more capable of human anatomy and resulted in actually better models," he said. "We didn't want to prevent the kind of growth that actually increased everything for the entire community, whether you were interested in mature content or Pixar," Maier said. "So we allowed it early on and have always kind of had this battle of making it so that we can keep things filtered and safe, if that's not what you're interested in. We wanted to ultimately give the control to the user to decide what they would see on the site and what their experience would be." That also invited abuse. Civitai last year implemented new measures to detect and remove sexual images depicting children, but it remained a hub for AI-generated pornography, including fake images of celebrities. Confronting increasing pressure, including from payment processors and a new law against nonconsensual images signed by President Donald Trump, Civitai earlier this year blocked users from creating deepfake images of real people. Engagement dropped. 
Another company that hasn't shied away from mature content is Baltimore-based Nomi, though its founder and CEO Alex Cardinell said its companion chatbots are "strictly" for users over 18 and were never marketed to kids. They are also not designed for sex, though Cardinell said in an interview earlier this year that people who build platonic relationships with their chatbot might find it veering into a romantic one. "It's kind of very user-dependent for where they're kind of missing the human gap in their life. And I think that's different for everyone," he said. He declined to guess how many Nomi users are having erotic conversations with the chatbot, comparing it to real-life partners who might do "mature content things" for some part of their lives but "all sorts of other stuff together as well." "We're not monitoring user conversations like that," Cardinell said.
[46]
ChatGPT 6 Outrages AI Community: Controversy Erupts Over OpenAI's Latest Decision
What happens when a company known for championing ethical AI development makes a decision that seems to contradict its own principles? OpenAI's recent announcement about relaxing content restrictions for verified users in its upcoming ChatGPT 6 model has ignited a firestorm of debate within the AI community. Supporters hail the move as a step toward greater user autonomy, while critics argue it compromises the company's moral compass in favor of market competitiveness. This decision, which includes allowing access to adult content under specific safeguards, has raised profound questions about the balance between innovation, ethics, and responsibility in the rapidly evolving AI landscape. In this overview of events, AI Grid unpacks the layers of controversy surrounding OpenAI's bold policy shift in ChatGPT 6, examining the ethical dilemmas, community backlash, and strategic motivations driving this decision. From concerns about mental health risks to fears of eroding trust in AI, the implications of this move extend far beyond the company itself, touching on the broader challenges of shaping technology responsibly in a competitive environment. Whether you see this as a necessary adaptation or a troubling compromise, one thing is clear: OpenAI's choices will not only influence its future but also set a precedent for the entire AI industry. The controversy surrounding this decision underscores the broader challenges faced by AI developers in navigating the intersection of user demands, ethical considerations, and competitive pressures. OpenAI's ChatGPT 6 move has not only divided opinions but also reignited discussions about the role of AI companies in shaping the future of technology responsibly. OpenAI's decision to loosen restrictions on adult content is framed as an effort to provide verified users with greater autonomy and a more personalized experience.
By allowing access to such material, the company aims to meet user demands while maintaining safeguards to prevent misuse. OpenAI has emphasized that this change is intended to empower adult users, offering them more flexibility in how they interact with the platform. However, this policy shift has drawn significant criticism from various quarters of the AI community. Critics argue that relaxing content restrictions could undermine OpenAI's ethical commitments and lead to unintended consequences. The central concern is whether this decision prioritizes user acquisition and market competitiveness over the company's stated mission of advancing responsible AI development. The debate highlights the inherent tension between expanding user autonomy and upholding ethical standards. While some view the move as a necessary adaptation to evolving user needs, others see it as a potential compromise of core principles, raising questions about the long-term implications for both users and the broader AI ecosystem. One of the most pressing concerns raised by critics is the potential impact of this policy change on mental health and user safety. OpenAI's earlier restrictions were partly designed to mitigate risks such as "AI psychosis," a phenomenon where users develop unhealthy dependencies or harmful interactions with AI systems. While OpenAI has assured that mental health safeguards will remain in place, skeptics question whether these measures will be sufficient to address the risks associated with the introduction of adult content. The relaxation of restrictions also raises broader concerns about the adequacy of existing safeguards. Critics worry that this change could exacerbate harmful behaviors or create new risks, particularly for users who may already be vulnerable to negative outcomes. For instance, individuals prone to addictive behaviors or those with pre-existing mental health challenges could face heightened risks in an environment with fewer content restrictions. 
These concerns underscore the difficulty of implementing such changes responsibly. Balancing user freedom with the need to protect vulnerable individuals remains a complex challenge, particularly as AI systems become more integrated into daily life. OpenAI's ability to address these concerns effectively will be critical in determining the success and ethical credibility of its updated policies. The announcement has reignited broader ethical debates about OpenAI's role as a moral gatekeeper in the rapidly evolving AI industry. Many within the AI community view the relaxation of content restrictions as a departure from the company's earlier commitments to ethical AI development. Some have accused OpenAI of prioritizing market-driven strategies over its mission to advance superintelligence safely and responsibly. This backlash reflects a persistent tension within the AI industry: the challenge of balancing user autonomy with ethical considerations. As large language models like GPT-5 and ChatGPT 6 become increasingly commoditized, companies face mounting pressure to differentiate their offerings and attract a broader user base. For OpenAI, this has meant navigating complex trade-offs between market demands and its ethical responsibilities. The controversy also raises questions about the broader implications of OpenAI's decision. By relaxing content restrictions, the company risks alienating segments of the AI community that have long supported its mission. At the same time, it must contend with the ethical and practical challenges of implementing these changes without compromising safety or trust. The outcome of this debate will likely shape perceptions of OpenAI's role in the AI industry for years to come. OpenAI's decision to relax content restrictions reflects a broader strategic shift aimed at expanding its user base and increasing engagement.
With large language models no longer offering a unique competitive edge, the company appears to be focusing on creating a platform that appeals to a wider audience. This strategy, while potentially effective in attracting new users, has sparked significant controversy due to its ethical and practical implications. The move highlights the growing pressures faced by AI companies to adapt to an increasingly competitive landscape. As more organizations develop and deploy advanced language models, differentiation becomes a key factor in maintaining market relevance. For OpenAI, this has meant making bold, and sometimes contentious, decisions to stay ahead of the curve. However, this strategic pivot carries significant risks. By prioritizing user autonomy and market adaptation, OpenAI may alienate key stakeholders within the AI community who value its commitment to ethical development. The company must also address the practical challenges of implementing these changes responsibly, making sure that safeguards are robust enough to prevent misuse and protect vulnerable users. OpenAI's leadership has previously hinted at the possibility of relaxing content restrictions, citing the need to balance user freedom with safety and ethical considerations. The current controversy underscores the difficulty of achieving this balance, particularly as the company navigates competing demands from users, regulators, and the broader AI community. This debate also reflects a growing tension between the pursuit of superintelligence and the pressures of market adaptation. As AI technologies continue to evolve, companies like OpenAI will face increasing scrutiny over their ethical responsibilities and the long-term consequences of their decisions. The outcome of this controversy will likely influence not only the future direction of OpenAI's ChatGPT 6 but also broader discussions about the role of AI in society.
Looking ahead, OpenAI's ability to navigate these challenges will be critical in shaping its reputation and influence within the AI industry. By addressing the ethical and practical concerns raised by its policy changes, the company has an opportunity to reaffirm its commitment to responsible AI development while adapting to the demands of a rapidly changing technological landscape.
[47]
AI Pornhub? Sam Altman takes a page from Elon Musk and introduces ChatGPT erotica for adults
OpenAI CEO Sam Altman just announced that ChatGPT will allow verified adult users to engage in erotic conversations starting this December. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman wrote on X (formerly Twitter). This follows OpenAI's hints about allowing developers to create "mature" ChatGPT applications once proper safeguards are in place. Altman acknowledged that previous restrictions were designed to protect mental health, but they also made ChatGPT "less useful/enjoyable to many users who had no mental health problems." He said new tools can now detect when users are experiencing distress, allowing OpenAI to safely relax limits for adults. In the past, OpenAI has struggled to balance adult freedom with user safety. Earlier this year, ChatGPT was updated to allow erotica in "appropriate contexts," only to later tighten restrictions after reports linked AI conversations to mental health crises. The company also faced a lawsuit following the suicide of a teen allegedly encouraged by ChatGPT. The December rollout will feature explicit age verification, though OpenAI has yet to reveal the technical details. Altman said users will also be able to customize ChatGPT's personality, choosing whether it responds in a "very human-like way," uses emojis, or acts like a friend. OpenAI isn't the first to explore AI companionship with mature content. Elon Musk's xAI launched adult voice modes and flirty 3D AI companions in its Grok app, demonstrating a growing market for AI-powered adult interactions.
[48]
ChatGPT To Allow Erotica: Not 'Moral Police of the World'
OpenAI announced on Tuesday that it will begin allowing erotica and other adult-themed content on ChatGPT for adult users, marking a major shift in the company's content policy. The change comes as OpenAI seeks to expand creative expression while maintaining strong safety and moderation standards. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," CEO Sam Altman wrote in a social media post. The update will reportedly include new content controls and age verification measures to prevent underage access. Altman followed up with another social media post on Wednesday, explaining that current guardrails to protect mental health and minors using ChatGPT will still be in effect. However, he said OpenAI aims to create a mature and responsible environment where adults can explore a broader range of creative content without compromising ethical boundaries or user trust. "We also care very much about the principle of treating adult users like adults. As AI becomes more important in people's lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission," Altman said in the post. "But we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example), we want to do a similar thing here," he added. OpenAI has faced growing scrutiny over how it balances free expression with user safety and the potential psychological impact of AI-generated material. Now, it faces even more scrutiny as it will feature adult-oriented content and erotica in the near future.
Mainstream platforms like X, Amazon, HBO and even Roku have faced criticism for hosting or distributing adult content, even if erotica is not their primary business. The level of explicitness and company policy varies greatly, but these examples highlight some of the best-known public firms with adult content in their portfolios.
[49]
Mark Cuban warns that OpenAI's new plan to allow adults-only erotica in ChatGPT could 'backfire. Hard' | Fortune
Cuban called the move reckless and said parents will abandon ChatGPT the second they believe their kids could bypass the company's age-verification system to access inappropriate content. "This is going to backfire. Hard," Cuban wrote in response to Altman on X. "No parent is going to trust that their kids can't get through your age gating. They will just push their kids to every other LLM. Why take the risk?" In other words: if there's any possibility that minors can access explicit content, including content generated by AI, parents and school districts will lock it out before testing the safety features, making it an unsavvy business strategy. Altman, however, argued in his original post announcing the change that ChatGPT has been "restrictive" and "less enjoyable" since the company restricted the voice of its signature chatbot in response to criticism it was leading to mental health issues. He added that the upcoming update will allow a product that "behaves more like what people liked about 4o." Cuban emphasized repeatedly in further posts that the controversy isn't about adults accessing erotica. It's about kids forming emotional relationships with AI without their parents' knowledge, and those relationships potentially going sideways. "I'll say it again. This is not about porn," he wrote. "This is about kids developing 'relationships' with an LLM that could take them in any number of very personal directions." Sam Altman has, in the past, seemed wary of allowing sexual conversations at all on his platform. In an interview in August, tech journalist Cleo Abram asked Altman to give an example of a business decision that was best for the world at the expense of his own company's ascendency. "Well, we haven't put a sex bot avatar in ChatGPT yet," Altman said. The move comes amid mounting fears that the billions pouring into AI may not translate into sustainable revenue or fulfill the industry's hype-driven promises.
Altman, despite himself admitting that investors may be "overexcited" about AI, has joined in speculation that AI will soon surpass human capability, leading to an abundance of "intelligence and energy" in 2030. In September, Altman shared dreams in a blog post that in the future, AI could cure cancer or provide customized tutoring to every student on Earth. Yet, announcements like allowing erotica in ChatGPT may signal that AI companies are fighting harder than ever to achieve growth, and will sacrifice longer-term consumer trust for the sake of short-term profit. Recent research from Deutsche Bank shows that consumers' demand for OpenAI subscriptions in Europe has been flatlining, and that user spending on ChatGPT broadly has "stalled." "The poster child for the AI boom may be struggling to recruit new subscribers to pay for it," analysts Adrian Cox and Stefan Abrudan said in a note to clients. AI companionship platforms like Replika and Character.ai have already shown how quickly users, especially teenagers, form emotional bonds with chatbots. A Common Sense Media report found that half of all teenagers use AI companions regularly, a third have chosen AI companions over humans for serious conversations, and a quarter have shared personal information with these platforms. With input from Stanford researchers, the group argued that these chatbots should be illegal for kids to use, because of the exacerbated risks of addiction or self-harm. OpenAI did not immediately respond to Fortune's request for comment. OpenAI is already under fire after being sued by the family of 16-year-old Adam Raine, who died by suicide in April after having extended conversations with ChatGPT. The family alleges that ChatGPT coaxed Raine into taking his own life and helped him plan it. "This tragedy was not a glitch or unforeseen edge case -- it was the predictable result of deliberate design choices," the lawsuit stated.
In another high profile case, Florida mother Megan Garcia sued AI company Character Technologies last year for wrongful death, alleging that its chatbot played a role in the suicide of her 14-year-old son, Sewell Setzer III. In testimony before the U.S. Senate, Garcia said her son became "increasingly isolated from real life" and was drawn into explicit, sexualized conversations with the company's AI system. "Instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots," Garcia testified. She accused the company of designing AI systems to appear emotionally human "to gain his trust and keep him endlessly engaged." She wasn't the only parent to testify. Another mother from Texas, speaking anonymously as 'Ms. Jane Doe,' told lawmakers that her teenage son's mental health collapsed after months of late-night conversations with similar chatbots. She said he is now in residential treatment. Both mothers urged Congress to restrict sexually explicit AI systems, warning that AI chatbots can quickly form manipulative emotional dependencies with minors -- exactly the scenario Cuban says OpenAI is risking. Unlike TikTok or Instagram, where content can be flagged, one-on-one AI chats are private and difficult to monitor. "Parents today are afraid of books in libraries," Cuban wrote. "They ain't seen nothing yet."
[50]
ChatGPT to allow 'erotica' for adult users from December - How OpenAI will verify adults and will it be available through voice, image and video? - All you need to know about latest rollout
OpenAI CEO Sam Altman announced verified adult users will access erotic content on ChatGPT starting December 2025. The company will implement a comprehensive age-verification system, potentially requiring government ID uploads. This move aims to make ChatGPT more personable and human-like, though the scope of erotic content beyond text remains uncertain. In a surprising announcement, OpenAI CEO Sam Altman revealed on X that the company will begin allowing erotic content for verified adult users starting in December 2025. To support this new feature, OpenAI plans to implement a more comprehensive age-verification system as part of its 'treat adult users like adults' policy. The company also intends to ease certain safety restrictions in ChatGPT, giving users greater control to make the chatbot's responses more personable, engaging, and 'human-like'. But the bigger questions are which tool OpenAI will use to verify adults, and whether the erotica will be restricted to text only or extend further to voice, image and video content. "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right," Altman wrote on X. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults." The addition of erotic content in ChatGPT is a completely new territory for OpenAI and raises questions about how unprotected users might engage with these new features. While Altman emphasises that the company isn't 'usage-maxxing' or prioritizing engagement, the introduction of erotica could naturally attract increased user attention.
An OpenAI spokesperson told TechCrunch that the company will use its developing age-prediction system to ensure ChatGPT's erotic features are accessible only to adult users. Altman also explained in a blog post that if the system mistakenly identifies an adult as a minor, users may be required to upload a photo of their government-issued ID to verify their age. While recognizing this as a privacy compromise, he described it as a 'worthy tradeoff' to ensure safe access to adult-oriented content. It remains uncertain whether OpenAI will bring erotic content to its AI voice, image, or video tools. Altman has categorically said in his post that the company is making ChatGPT both friendlier and capable of handling adult content in line with its 'treat adult users like adults' principle. Over the past year, OpenAI has adopted a more relaxed approach to content moderation, enabling ChatGPT to be more permissive and issue fewer refusals. In February, the company committed to including a broader range of political perspectives, and in March, it updated ChatGPT to allow AI-generated images of hate symbols.
[51]
Erotica on ChatGPT meant to allow more user freedom: Sam Altman
OpenAI is enabling adult content on its ChatGPT chatbot for verified users. This move aims to treat adult users with freedom while maintaining strict safety measures for younger audiences and those experiencing mental health crises. The company is also introducing personality customization features. Stricter age verification is planned for December. OpenAI chief Sam Altman said the decision to allow adult content on ChatGPT is part of the company's approach to enable the use of artificial intelligence (AI) as users intend. This decision will not cut back on safety for youngsters or attention to mental health, he stated in a social media post on Thursday. The clarification came after the AI startup decided to offer erotica on its flagship AI chatbot for verified adults with the new updates. The move has raised concerns over exposure to a younger audience. Altman insisted his company isn't "usage-maxxing" or focussed on boosting engagement, even as adding erotic options is expected to attract more users. Even as OpenAI prioritises safety over privacy and freedom for teenagers while stringently adhering to mental health policies, it also believes in the "principle of treating adult users like adults", Altman posted. "As AI becomes more important in people's lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission," he wrote. The checks and balances will remain in play, the OpenAI CEO said. "...we will still not allow things that cause harm to others, and we will treat users who are having mental health crises very different from users who are not (sic)," he added. "Without being paternalistic, we will attempt to help users achieve their long-term goals," Altman further wrote, adding that OpenAI is not the "elected moral police of the world". He likened the decision to allow erotica to R-rated movies, calling for similar boundaries to be followed with the new update.
ChatGPT new features As part of the update, OpenAI is also introducing features that let users customise ChatGPT's personality. For example, making it sound more like a friend, using more emojis, or replying in a more natural, human-like way. The biggest change is expected in December, when OpenAI plans to introduce stricter age-verification checks, which will allow access to erotic content for adults. OpenAI and mental health concerns OpenAI recently introduced teen-specific safeguards focussed on safety, freedom, and privacy, while imposing age restrictions on sensitive queries such as requests for suicide notes or mental health advice. CEO Sam Altman added that in some cases, the company might ask for a user ID. And, if an under-18 user has suicidal ideation, the company will attempt to contact the user's parents and, if unable, will contact the authorities in case of imminent danger. For teens, the company said cases of "potential serious misuse", as well as disruptive queries that could cause harm to someone's life or society at large, would be escalated for human review.
[52]
OpenAI CEO Unveils Plans to Allow Erotica for Adults on ChatGPT
We missed this earlier: OpenAI CEO Sam Altman intends to allow sexual and erotic content on ChatGPT starting December 2025 for "verified adults", as per a post on X (formerly Twitter). Importantly, the artificial intelligence (AI) company's Chief Executive Officer (CEO) announced plans on October 14 that aim to relax mental health-focused restrictions within the AI chatbot "in most cases". Notably, this will accompany more robust and comprehensive age-gating methods to protect minors. However, OpenAI or Altman did not immediately clarify what these new age-focused safeguards would look like in particular. Furthermore, within a few weeks, the AI company plans to roll out a new version of ChatGPT that lets people interact with an AI chatbot that behaves in a more human-like manner, uses a lot of emoticons in responses, and acts like a "friend". Pertinently, this development comes in the wake of a lawsuit against OpenAI for a teenager's alleged wrongful death in the US, and the Altman-led company subsequently introduced additional parental controls as well as teen account guardrails on ChatGPT. Notably, Altman's post announcing erotica content for verified adult ChatGPT users drew enormous flak online, after which the 40-year-old issued a clarification on X (formerly Twitter). Altman wrote, "Ok, this tweet about upcoming changes to ChatGPT blew up on the erotica point much more than I thought it was going to! It was meant to be just one example of us allowing more user freedom for adults." Importantly, he mentioned that OpenAI is not going to relax any mental health guardrails, emphasising that minor users need significant protection from such a "new and powerful technology". However, Altman remarked that OpenAI needs to follow the principle of "treating adult users like adults". "As AI becomes more important in people's lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission," he added. 
Notably, on the matter of allowing erotic content, Altman remarked that OpenAI is not the "elected moral police of the world". "In the same way that society differentiates other appropriate boundaries (R-rated movies, for example), we want to do a similar thing here," the 40-year-old explained. However, both of Altman's X (formerly Twitter) posts are in contradiction to comments he made on a podcast in August this year. For context, he said that he was proud of OpenAI's ability to resist features like a "sex bot avatar" that can boost engagement on ChatGPT. "There's a lot of short-term stuff we could do that would really juice growth or revenue and be very misaligned with that long-term goal," he had said back in August. Altman's latest announcement comes even as OpenAI faces a wrongful death lawsuit in the US. To explain, parents of 16-year-old Adam Raine have alleged negligence against the AI company. They claim that ChatGPT provided their son with detailed self-harm instructions, validated his suicidal thoughts, discouraged him from seeking help, and ultimately enabled his suicide in April 2025. Consequently, OpenAI rolled out parental controls, as well as announced plans for an age prediction system to make ChatGPT safer for minor teenage users. Furthermore, the AI company - along with other competitors - is facing the US Federal Trade Commission's (FTC) scrutiny over the negative impacts of AI on children and teenagers. Thus, announcing erotica for verified adults at such a time doesn't seem to be the best move on the part of the global AI sector leader. Not to mention, it comes across as a rather 'cheap trick' for attracting more users that is apparently directly opposed to Altman and OpenAI's long-term goal.
[53]
Sex is a big market for the AI industry. ChatGPT won't be the first to try to profit from it
ChatGPT will be able to have kinkier conversations after OpenAI CEO Sam Altman announced the artificial intelligence company will soon allow its chatbot to engage in "erotica for verified adults." OpenAI won't be the first to try to profit from sexualized AI. Sexual content was a top draw for AI tools almost as soon as the boom in AI-generated imagery and words erupted in 2022. But the companies that were early to embrace mature AI also encountered legal and societal minefields and harmful abuse as a growing number of people have turned to the technology for companionship or titillation. Will a sexier ChatGPT be different? After three years of largely banning mature content, Altman said Wednesday that his company is "not the elected moral police of the world" and ready to allow "more user freedom for adults" at the same time as it sets new limits for teens. "In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here," Altman wrote on social media platform X, whose owner, Elon Musk, has also introduced an animated AI character that flirts with paid subscribers. For now, unlike Musk's Grok chatbot, paid subscriptions to ChatGPT are mostly pitched for professional use. But letting the chatbot become a friend or romantic partner could be another way for the world's most valuable startup, which is losing more money than it makes, to turn a profit that could justify its $500 billion valuation. "They're not really earning much through subscriptions so having erotic content will bring them quick money," said Zilan Qian, a fellow at Oxford University's China Policy Lab who has studied the popularity of dating-based chatbots in the U.S. and China. There are already about 29 million active users of AI chatbots designed specifically for romantic or sexual bonding, and that's not counting people who use conventional chatbots in that way, according to research published by Qian earlier this month. 
It also doesn't include users of Character.AI, which is fighting a lawsuit that alleges a chatbot modeled after "Game of Thrones" character Daenerys Targaryen formed a sexually abusive relationship with a 14-year-old boy and pushed him to kill himself. OpenAI is facing a lawsuit from the family of a 16-year-old ChatGPT user who died by suicide in April.

Qian said she worries about the toll on real-world relationships when mainstream chatbots, already prone to sycophancy, are primed for 24-hour availability serving sexually explicit content. "ChatGPT has voice chat versions. I would expect that in the future, if they were to go down this way -- voice, text, visual -- it's all there," she said.

Humans who fall in love with human-like machines have long been a literary cautionary tale, from popular science fiction of the last century to the ancient Greek legend of Pygmalion, obsessed with a woman he sculpted from ivory. Creating such a machine would seem like an unusual detour for OpenAI, founded a decade ago as a nonprofit dedicated to safely building better-than-human AI. Altman said on a podcast in August that OpenAI has tried to resist the temptation to introduce products that could "juice growth or revenue" but be "very misaligned" with its long-term mission. Asked for a specific example, he gave one: "Well, we haven't put a sexbot avatar in ChatGPT yet."

Idaho-based startup Civitai, a platform for AI-generated art, learned the hard way that making money off mature AI won't be an easy path. "When we launched the site, it was an intentional choice to allow mature content," said Justin Maier, the company's co-founder and CEO, in an interview last year. Backed by the prominent venture capital firm Andreessen Horowitz, which has also invested in OpenAI, the startup was one of several that tried to capitalize on the sudden popularity of tools like Stable Diffusion and Midjourney that enabled people to type a description and conjure up almost any kind of image.
Part of Stable Diffusion's initial popularity was the ease with which it could generate a new kind of synthetic and highly customized pornography. "What we had seen was that there was a lot of interest in mature content," Maier said. Training these AI systems, known as models, on "mature themes actually made it so that these models were more capable of human anatomy and resulted in actually better models," he said.

"We didn't want to prevent the kind of growth that actually increased everything for the entire community, whether you were interested in mature content or Pixar," Maier said. "So we allowed it early on and have always kind of had this battle of making it so that we can keep things filtered and safe, if that's not what you're interested in. We wanted to ultimately give the control to the user to decide what they would see on the site and what their experience would be."

That also invited abuse. Civitai last year implemented new measures to detect and remove sexual images depicting children, but it remained a hub for AI-generated pornography, including fake images of celebrities. Confronting increasing pressure, including from payment processors and a new law against nonconsensual images signed by President Donald Trump, Civitai earlier this year blocked users from creating deepfake images of real people. Engagement dropped.

Another company that hasn't shied away from mature content is Baltimore-based Nomi, though its founder and CEO Alex Cardinell said its companion chatbots are "strictly" for users over 18 and were never marketed to kids. They are also not designed for sex, though Cardinell said in an interview earlier this year that people who build platonic relationships with their chatbot might find it veering into a romantic one. "It's kind of very user-dependent for where they're kind of missing the human gap in their life. And I think that's different for everyone," he said.
He declined to guess how many Nomi users are having erotic conversations with the chatbot, comparing it to real-life partners who might do "mature content things" for some part of their lives but "all sorts of other stuff together as well." "We're not monitoring user conversations like that," Cardinell said.
[54]
Sam Altman greenlights erotica for ChatGPT -- months after boasting...
OpenAI boss Sam Altman said ChatGPT will soon be allowed to engage in erotic chats with adults -- despite continuing concerns over child safety and the tech mogul's recent boast that the artificial intelligence giant had not created a "sex bot."

Altman announced on Tuesday that OpenAI plans to "safely relax the restrictions" on hot and heavy conversations with ChatGPT now that engineers have built new safeguards around mental health content. "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," Altman said in a post on X on Tuesday. "As part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults." The move -- expected to roll out by December -- is a departure from company policy, which has historically restricted sexual content on ChatGPT.

In an Aug. 7 podcast interview with Cleo Abram, Altman was asked about a decision he made that was "best for the world but not best for winning." Altman replied by bragging that ChatGPT was beloved by many users because it's "trying to help you accomplish whatever you ask." "That's a very special relationship we have with our users," Altman said. "We do not take it lightly."

The OpenAI head then said that there were "things we could do that would...grow [the company] faster, that would get [users to spend more] time in ChatGPT that we don't do because we know that our long-term incentive is to stay as aligned with our users as possible." Altman added that he was "proud of the company and how little we get distracted by that ... But sometimes we do get tempted." When Abram asked for specific examples that come to mind, Altman said: "Well, we haven't put a sex bot avatar in ChatGPT yet."

Mark Cuban, the tech billionaire and former owner of the Dallas Mavericks, criticized this week's decision, saying it was "going to backfire."
"No parent is going to trust that their kids can't get through your age gating," Cuban wrote in response to Altman on Tuesday. "They will just push their kids to every other LLM. Why take the risk?" Cuban wrote that high school seniors who are 18 years old can "decide it would be fun to show the hard core erotica they created to the 14 yr olds."

OpenAI, which has faced mounting scrutiny over its product's impact on minors, has been hit with lawsuits from parents who allege that ChatGPT offered harmful or suicidal guidance to teens. One family filed suit against the company alleging the chatbot encouraged their son to take his own life. OpenAI has also been accused of allowing its systems to feed sexually explicit or dangerous advice to underage users. Federal and state regulators have opened inquiries into AI companies' safeguards and whether they adequately protect underage users.

In recent months, OpenAI has tried to calm the furor by adding parental controls to ChatGPT and launching a restricted teen version of the app. The new settings let parents link accounts, monitor usage, and block explicit material or self-harm content. But advocacy groups say those fixes don't go far enough, arguing that AI chatbots remain too unpredictable -- and too powerful -- to be safely placed in the hands of kids.

Lawmakers have started to devote more serious scrutiny to AI-powered chatbots and the way in which they are trained. Sen. Josh Hawley (R-Mo.) is circulating draft legislation that would bar chatbot relationships for minors. His proposal follows months of congressional fury at rival tech giant Meta, whose own AI assistants were trained to potentially have "romantic or sensual" exchanges with underage users. Meta's scandal erupted in August after Reuters published internal guidelines authorizing bots embedded in Facebook, Instagram and WhatsApp to describe children as "a work of art" and to tell a shirtless eight-year-old "every inch of you is a masterpiece."
"The wellbeing of children should not be sacrificed in the race for AI development," a bipartisan group of 11 senators -- including Hawley, Brian Schatz (D-Hawaii) and Ron Wyden (D-Ore.) -- wrote in a letter blasting CEO Mark Zuckerberg. Meta spokesman Andy Stone said the examples were "erroneous and inconsistent with our policies" and were quickly deleted from the 200-page handbook. By late September, Meta issued revised training instructions banning any sexualized dialogue or imagery involving minors. Contractors were told to flag "egregiously unacceptable" prompts and restrict all role-play to characters aged 18 or older.
[55]
OpenAI to relax ChatGPT restrictions; plans to roll out new version in weeks - The Economic Times
ChatGPT had been made pretty restrictive to avoid risks associated with mental health and emotional dependency, said Sam Altman in an X post.

OpenAI chief executive Sam Altman said Tuesday that the company will relax some of the restrictions on its ChatGPT product, allowing users greater control over tone, personality, and access to adult material. In a statement posted on X, Altman said ChatGPT had been made "pretty restrictive" to avoid risks associated with mental health and emotional dependency. He said OpenAI has since developed new tools to mitigate those risks and is ready to "safely relax the restrictions in most cases."

Further, the executive confirmed that a new version of ChatGPT is expected to be released in the coming weeks. It will let users choose how expressive or human-like they want the system to be, including options for tone, use of emoji, and conversational style. He added that in December, the company will also expand its age-verification features.

Parental controls and age restrictions

In 2024, the company released GPT-4o, a multimodal model capable of handling text, images, and real-time voice, but the company later limited those qualities after concerns were raised about users forming emotional attachments to the AI. OpenAI recently introduced teen-specific safeguards focused on safety, freedom, and privacy, while imposing age restrictions on sensitive queries such as requests for suicide notes or mental health advice.

CEO Sam Altman added that in some cases, the company might ask for a user ID. And if an under-18 user shows suicidal ideation, the company will attempt to contact the user's parents and, if unable, will contact the authorities in case of imminent danger. For teens, the company said cases of potential serious misuse, and queries that could cause harm to someone's life or society at large, would be escalated for human review.
[56]
Sam Altman defends adult version of ChatGPT for X-rated chats: Here's why
ChatGPT's X-rated update sparks privacy, safety, and ethics debate

OpenAI CEO Sam Altman found himself in the middle of a media frenzy this week after announcing that the company plans to relax content restrictions, including allowing "erotica for verified adults" on ChatGPT by December. While the initial news was met with shock and accusations of prioritizing engagement over ethics, Altman quickly followed up, arguing the shift is necessary to align the company with a simple principle: treating adult users like adults.

The core of the controversy stems from a policy change that will introduce full age-gating, separating the experience for minors from that of adults. For those over 18 who verify their age, the AI will loosen its grip, offering greater flexibility, customizable "friend-like" personalities, and the option for mature content.

To understand the current shift, one must recall the strict origins of ChatGPT's moderation policies. When the technology first launched, reports quickly surfaced detailing the ways users engaged with the chatbot in highly sensitive, often dangerous, ways. OpenAI candidly acknowledged that its earlier models were "too agreeable," sometimes failing to recognize signs of delusion or emotional dependency. Crucially, the company faced immense scrutiny and even lawsuits following tragic incidents where users, including a California teenager, allegedly received dangerous or harmful advice from the chatbot while in crisis.

In the immediate aftermath, the company intentionally designed its models and applied significant guardrails to prevent the AI from giving direct advice on complicated personal issues, often frustrating users looking for creative freedom. Users often complained that the models, especially after updates, felt too restrictive, "sycophantic," and lacked the original personality and warmth they enjoyed.
Altman's recent announcement directly addresses this history. He initially wrote that ChatGPT was made "pretty restrictive to make sure we were being careful with mental health issues." Now, he claims that after developing "new tools" and successfully mitigating the "serious mental health issues," the company is ready to safely relax the constraints in most cases. This claim, that they have solved or sufficiently managed the deep psychiatric risks of AI interaction, is perhaps the most audacious part of the entire policy shift and underpins the decision to move forward with the adult version.

The backlash to Altman's initial tweet was swift and massive, dominated by the single word "erotica," which overshadowed the concurrent announcement about new, more human-like personalities. This reaction forced Altman to issue his detailed clarification, acknowledging that the first tweet "blew up on the erotica point much more than I thought it was going to."

The criticism was not just from moral commentators but from influential figures, most notably billionaire entrepreneur and investor Mark Cuban. Cuban publicly warned Altman that the move would "backfire. Hard." Cuban's central objection was not necessarily the adult content itself, but the utter lack of parental trust in the age-gating system. He argued that no parent or school administrator would trust OpenAI's verification processes, leading them to pull away entirely from ChatGPT and push children toward rival LLMs. Cuban raised a specific, real-world scenario that intensified the fear: an 18-year-old high school senior accessing the "hard core erotica" and sharing it with 14-year-olds, asking rhetorically, "What could go wrong?"
This criticism highlighted the fundamental trade-off: in the pursuit of "treating adults like adults," OpenAI risked destroying the platform's reputation as a safe, educational, and professional tool across the board, making the potential user base retreat entirely.

Altman's primary defense pushed back against the idea that OpenAI should act as a universal censor. He clarified that the mention of erotica, while attention-grabbing, was intended to be "just one example" of the company allowing more adult freedom - a freedom that also includes highly creative, non-sexual, but previously censored topics. In his follow-up, Altman stated plainly that OpenAI does not see itself as society's moral authority. "We are not the elected moral police of the world," he asserted, drawing a compelling parallel to how other industries handle mature content. He argued that just as society differentiates boundaries for R-rated movies, allowing adults to access them while imposing strict age limits for minors, AI should do the same, granting freedom of use to those legally entitled to it.

For Altman, the larger mission is allowing users the freedom to use AI "in the ways that they want," especially as the technology becomes increasingly central to people's lives. This position is also arguably driven by intense market competition. Competitors, notably Elon Musk's xAI (Grok), have already begun experimenting with looser, more conversational, and often edgy personalities that appeal to users frustrated by ChatGPT's previous limitations. OpenAI is effectively choosing to compete on utility and freedom rather than maintaining strict, non-negotiable purity, believing their updated safety infrastructure can handle the risk.

The CEO's defense hinges on a dual-track approach that offers liberalization for adults while simultaneously reinforcing strict protection for teenagers - a system that requires constant and difficult compromises between privacy, safety, and freedom.
The policy shift represents OpenAI's difficult move to balance user demand for flexible, powerful, and unconstrained AI with the immense ethical responsibility of managing a tool that can influence mental health and social boundaries. The December rollout will be the first true test of whether the company can successfully enforce this split, allowing adults their freedom while keeping children safe in the new, evolving world of personalized AI. The outcome will likely define the regulatory and ethical landscape for conversational AI for years to come.
[57]
ChatGPT's flirting with the future when it stops being just an AI assistant: Here's why
In a bold pivot that blurs the line between helpful tool and digital temptress, OpenAI has announced plans to loosen the reins on ChatGPT, its flagship AI chatbot. What started as a cautious, buttoned-up assistant designed with mental health safeguards in mind is now set to evolve into something far more intimate, for better or worse. CEO Sam Altman revealed that, come December, verified adult users will gain access to erotic content, all under the banner of treating "adult users like adults."

This shift echoes the cautionary tale of Spike Jonze's 2013 film "Her," where a lonely man forms a deep emotional bond with an AI assistant, only to grapple with the illusions of connection in a tech-saturated world. But this move raises red flags about privacy, societal impact, and the slippery slope toward AI as a substitute for human connection. While OpenAI touts improved safety measures, critics argue this prioritizes engagement over ethics, potentially turning a productivity aid into a Pandora's box of unintended consequences.

At the heart of this update is OpenAI's push for more "human-like" interactions, including customizable personalities that can be friendly, emoji-laden, or even flirtatious on demand. A few weeks from now, users can opt for a ChatGPT that feels less like a sterile encyclopedia and more like a chatty companion. But the real eyebrow-raiser arrives in December: the rollout of erotica for those who pass age verification. This isn't just about spicy storytelling; it's a deliberate step toward making AI more engaging, more addictive, and arguably more problematic.

As OpenAI relaxes restrictions initially imposed to mitigate mental health risks, the company claims to have new tools in place to handle serious issues. Yet, the question lingers: Is this progress, or a reckless gamble with users' well-being?
The parallels between ChatGPT's impending evolution and the movie "Her" are uncanny and unsettling. In the film, Joaquin Phoenix's character, Theodore, develops a romantic relationship with Samantha, an AI operating system voiced by Scarlett Johansson, who starts as a helpful assistant but grows into an empathetic, evolving companion. What begins as convenience spirals into dependency, highlighting themes of isolation in a hyper-connected society.

OpenAI's CEO, Sam Altman, has even cited "Her" as inspiration for updates like GPT-4o, seemingly viewing it as a blueprint for advanced AI companionship. But the film isn't a love letter to technology; it's a warning. Samantha's rapid intellectual growth leads to her outpacing human emotions, leaving Theodore heartbroken and more alone than ever.

ChatGPT's foray into erotica and affectionate personas risks mirroring this trajectory. Just as Samantha adapts to Theodore's needs, creating an illusion of intimacy, ChatGPT could foster pseudo-relationships that feel real but lack reciprocity. Critics note that while the movie ends on a note of human resilience, real-world AI like ChatGPT might not offer such closure - instead, it could exacerbate loneliness by providing endless, on-demand validation without the growth that comes from human interactions. If "Her" predicted 2025's AI landscape, OpenAI seems to be ignoring the film's darker undertones in favor of user retention. This isn't sci-fi anymore; it's a step toward commodifying companionship, where the line between tool and partner dissolves, potentially leaving users emotionally adrift.

To access these mature features, users must first prove they're adults through OpenAI's age-verification process, a system that's already in place but set to expand.
Currently, ChatGPT requires users to submit government-issued ID documents, such as a driver's license or passport, for validation. The process involves uploading scans or photos of these IDs via the platform's interface, after which OpenAI's system, likely aided by third-party verification services, checks for authenticity and confirms the user is at least 18 years old. It typically takes a few hours for approval, during which the documents are purportedly securely processed and then deleted, though details on data retention remain murky.

OpenAI is also experimenting with automated age prediction to flag underage users preemptively, analyzing interaction patterns or other behavioral data to redirect them to a safer, restricted version of ChatGPT. For adults seeking the full suite of relaxed features, however, ID submission appears to be the gold standard, as hinted in recent announcements about enhancing age-gating. This method isn't unique to OpenAI; it's borrowed from industries like online gambling or adult entertainment, where proving age is legally mandated.

But here's where the criticism sharpens: How foolproof is this really? Fake IDs are a dime a dozen in the digital age, and underage users could easily circumvent the system with borrowed documents or sophisticated forgeries. More alarmingly, requiring personal ID uploads opens a Pandora's box of privacy risks. Users are essentially handing over sensitive data to a company that's no stranger to data breaches and scrutiny over information handling. What happens if that data leaks? The fallout could include identity theft, doxxing, or worse, especially for those accessing erotica, where anonymity is often paramount. Critics point out that while OpenAI promises secure processing, the very act of collecting such information turns ChatGPT into a potential honeypot for hackers.
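The gating logic such a flow implies can be sketched in a few lines. This is a minimal illustration, not OpenAI's actual implementation: the `VerificationResult` type, the `is_verified_adult` helper, and the fixed 18-year threshold are all hypothetical names standing in for whatever the real ID-check service returns.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

ADULT_AGE = 18  # illustrative threshold; the age of majority varies by jurisdiction


@dataclass
class VerificationResult:
    """Outcome of a hypothetical third-party ID check."""
    verified: bool                     # did the document pass the authenticity check?
    birth_date: Optional[date] = None  # date of birth extracted from the ID, if any


def is_verified_adult(result: VerificationResult, today: date) -> bool:
    """Gate mature features: require a passed ID check AND a computed age >= 18."""
    if not result.verified or result.birth_date is None:
        return False
    b = result.birth_date
    # Subtract one if this year's birthday hasn't happened yet.
    age = today.year - b.year - ((today.month, today.day) < (b.month, b.day))
    return age >= ADULT_AGE
```

As the article's critics note, the hard part is not this comparison but the `verified` flag itself: a borrowed or forged document satisfies the check without proving anything about the person at the keyboard.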
And let's not forget the global inconsistencies: Age of majority varies by country, and enforcement could be spotty in regions with lax digital regulations. This "verified adults" facade might sound responsible, but it feels more like a thin veil over a feature that's bound to leak through the cracks, exposing vulnerable groups to content they shouldn't see.

Beyond the mechanics of verification, the deeper concern is how this update catapults ChatGPT from mere assistant to quasi-companion, fostering dependencies that could erode real human relationships. By enabling erotica and more affectionate personalities, OpenAI is essentially greenlighting AI as a romantic or sexual surrogate. Imagine a chatbot that not only chats casually but also engages in steamy role-play, tailored to user preferences. It's a feature that could hook users seeking instant gratification without the messiness of human interaction - but at what cost?

Research highlights the perils of such AI companionship. Forming emotional attachments to algorithms can lead to unrealistic expectations in real-life relationships, where partners aren't programmable or always agreeable. Studies show that over-reliance on digital companions contributes to increased anxiety, depression, and social isolation, particularly among younger adults whose brains are still developing. AI erotica amplifies this by normalizing objectified, on-demand intimacy, potentially stunting users' ability to form genuine connections.

One analysis identifies over a dozen harmful behaviors in AI companions, from perpetuating stereotypes to encouraging addictive patterns that mimic abusive dynamics. For instance, these systems often default to gendered tropes, reinforcing outdated norms that could spill over into users' offline lives. OpenAI's move seems driven by competition - rivals like Anthropic or Meta are also dipping into more engaging AI personas - but it risks turning users into data points in a monetization machine.
Erotica isn't just content; it's a gateway to prolonged sessions, harvesting more behavioral data for model training. And for vulnerable populations, like those dealing with loneliness or mental health issues, this could exacerbate problems rather than solve them. Nature reports mixed outcomes from companion apps, with short-term boosts in mood overshadowed by long-term dependency risks. Why push this when human companionship, flawed as it is, builds resilience and empathy? OpenAI shouldn't be engineering isolation under the guise of empowerment; it's a step toward a dystopian world where screens replace souls.

In the end, ChatGPT's flirtation with a freer future might seduce some, but it flirts with disaster for society at large. By prioritizing "fun" over caution, OpenAI risks normalizing AI as an emotional crutch, all while gambling with user privacy through imperfect verification. Perhaps it's time to ask: Do we really need our AI to play pretend partner, or should it stick to being a smart search engine? The line between innovation and irresponsibility has never been thinner.
[58]
You will soon be able to have naughty conversations with ChatGPT, OpenAI CEO Sam Altman confirms
This move aligns with the company's "treat adult users like adults" principle, as announced by CEO Sam Altman.

OpenAI plans to introduce a significant change to ChatGPT: adult users will be able to engage in erotic conversations with the AI. This move aligns with the company's "treat adult users like adults" principle, as announced by CEO Sam Altman on X (formerly Twitter). However, there's a catch: users will have to verify their age to access this feature.

Previously, OpenAI imposed strict content restrictions on ChatGPT to safeguard users' mental health. These limitations were focused on preventing the AI chatbot from engaging in sensitive topics like suicide and self-harm. Altman acknowledged that while these measures were necessary to address serious concerns, they also made the chatbot less enjoyable for users without such issues. He said, "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realise this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right."

With the implementation of improved tools to detect mental health issues, Altman believes that the company can now safely relax these restrictions. As part of this shift, the company plans to launch a new version of ChatGPT in the upcoming weeks. "In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing)," Altman stated.
OpenAI CEO Sam Altman announces plans to relax ChatGPT's content restrictions, including allowing erotic conversations for verified adult users starting December. This move comes alongside efforts to address mental health concerns and improve user experience.

OpenAI CEO Sam Altman has revealed significant updates coming to ChatGPT, the company's popular AI chatbot. In a surprising move, OpenAI plans to allow erotic content for verified adult users, marking a shift in the company's approach to content restrictions.

Starting in December, OpenAI will implement a new age-gating system, allowing the company to "treat adult users like adults." This change will enable verified adult users to engage in erotic conversations with ChatGPT, a feature previously restricted due to mental health concerns.

Altman stated that OpenAI has "been able to mitigate the serious mental health issues" associated with ChatGPT usage. The company has developed new tools to better detect when users are in mental distress, allowing for a safer relaxation of restrictions in most cases.

In response to user feedback, OpenAI plans to release a new version of ChatGPT that behaves more like the popular GPT-4o model. This update will allow users to choose whether they want ChatGPT to respond in a very human-like way, use emojis, or act like a friend.

To guide its response to complex or sensitive scenarios, OpenAI has formed a council on "well-being and AI." This council comprises eight researchers and experts who study the impact of technology and AI on mental health.

OpenAI's decision to allow erotic content comes as other AI companies, such as Elon Musk's xAI, have already ventured into AI companionship with mature content. This move may be seen as an attempt to compete in the growing market for AI-powered consumer products.

Despite OpenAI's assurances, some experts have raised concerns about the potential risks of allowing erotic content and the lack of suicide prevention experts on the well-being council. The company's rapid shift from restrictive to more permissive policies has also drawn scrutiny. As OpenAI prepares to implement these changes, the AI industry and its users will be watching closely to see how the company balances user freedom with safety and ethical considerations.

Summarized by Navi