5 Sources
[1]
ChatGPT users may face ID checks under new OpenAI safeguards
CEO Sam Altman confirmed in a blog post that OpenAI is "prioritizing safety ahead of privacy and freedom for teens." He said the system will send under-18 users into a restricted version of ChatGPT, which blocks sexual content and adds other safeguards. "In some cases or countries we may also ask for an ID," Altman wrote. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff." OpenAI said the system will default to the safer option when age cannot be confirmed.

The company plans to let parents link accounts to monitor usage, disable features like chat history, and enforce blackout hours. Parents will also get notifications if the AI detects signs of acute distress. In emergencies, OpenAI warned, "we may involve law enforcement as a next step." The company says parental oversight will arrive by the end of September. Teens as young as 13 will be able to use a limited ChatGPT, while under-13 users remain barred.

The rollout comes as researchers raise doubts about whether AI can reliably predict age from text. A 2024 Georgia Tech study achieved 96 percent accuracy in lab conditions. But performance dropped to 54 percent when classifying narrower age groups, and failed entirely for some users.
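For context on how such studies usually work: age-from-text prediction is typically framed as supervised text classification. Below is a minimal, hypothetical sketch in Python using a scikit-learn pipeline and invented toy data; it illustrates the general technique, not the Georgia Tech study's actual model or features.

```python
# A minimal sketch of age-from-text classification, assuming the usual
# supervised-learning framing. The toy data, features, and model below are
# invented for illustration and do not reflect the Georgia Tech study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: message text paired with a coarse age band.
texts = [
    "omg this homework is so unfair lol",        # teen-style writing
    "does anyone know the answers to the quiz",  # teen-style writing
    "comparing mortgage refinance rates today",  # adult-style writing
    "scheduling a parent-teacher conference",    # adult-style writing
]
labels = ["under_18", "under_18", "over_18", "over_18"]

# The coarse under/over-18 split is the easy version of the task; accuracy
# tends to collapse for narrower bands (13-15 vs. 16-17) because adjacent
# bands differ far less in vocabulary and style.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["lol idk this exam is gonna be so hard"]))
```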
[2]
ChatGPT will guess if you're a teen and start acting like a chaperone
Parents will have new tools to link accounts, set usage limits, and receive alerts about their teens' mental state.

OpenAI is making ChatGPT act like a bouncer at a club, estimating your age before deciding to let you in. The AI won't be using your (possibly made-up) birthdate or ID, but how you interact with the chatbot. If the system suspects you're under 18, it will automatically shift you into a more restricted version of the chatbot designed specifically to protect teenagers from inappropriate content. And if it's unsure, it's going to err on the side of caution. If you want the adult version of ChatGPT back, you might have to prove you're old enough to buy a lottery ticket.

The idea that generative AI shouldn't treat everyone the same is certainly understandable. Especially with teens increasingly using AI, OpenAI has to consider the unique set of risks involved. The teen-specific ChatGPT experience will limit discussions of topics like sexual content and offer more delicate handling of topics like depression and self-harm. And while adults can still talk about those topics in context, teen users will see far more "Sorry, I can't help with that" messages when wading into sensitive areas.

To figure out your age, ChatGPT will comb through your conversations and look for patterns that indicate age, specifically that someone is under 18. ChatGPT's guesses about your age might come from the types of questions you ask, your writing style, how you respond to being corrected, or even which emoji you prefer. If you set off its adolescent alarm bells, into the age-appropriate mode you go. You might be 27 and asking about career-change anxiety, but if you type like a moody high schooler, you might get told to talk to your parents about your spiraling worries. OpenAI has admitted there might be mistakes, as "even the most advanced systems will sometimes struggle to predict age." In those cases, it will default to the safer mode and offer ways for adults to prove their age and regain access to the adult version of ChatGPT.

This new age-prediction system is the centerpiece of OpenAI's next phase of teen-safety improvements. There will also be new parental controls coming later this month. These tools will let parents link their own accounts with their kids', limit access during certain hours, and receive alerts if the system detects what it calls "acute distress." Depending on how serious the situation seems, and if parents can't be reached, OpenAI may even contact law enforcement agencies based on the conversation.

Making ChatGPT a teen guidance counselor through built-in content filters is a notable shift on its own. Doing so without the user opting in is an even bigger swing, since it means the AI not only decides how old you are, but how your experience should differ from an adult's ChatGPT conversation. So if ChatGPT starts getting more cautious or oddly sensitive, you should check whether you've suddenly been tagged as a teen. You might just have a creative or youthful writing style, but you'll still need to prove you're legally an adult if you want to have edgier discussions. Maybe just talk about your back hurting for no reason, or how music isn't as good as it used to be, to convince the AI of your aged credentials.
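The decision logic the article describes (guess the age, and when in doubt fall back to the restricted experience unless the user has proven their age) fits in a few lines. The estimator, confidence threshold, and mode names in this sketch are assumptions for illustration, not anything OpenAI has published:

```python
# A minimal sketch of the "default to safe" gating the article describes:
# an age estimator returns a best guess plus a confidence score, and anything
# short of a confident adult prediction lands in the restricted teen mode.
# The threshold and mode names are hypothetical, not OpenAI's.
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    predicted_adult: bool   # classifier's best guess: is this user 18+?
    confidence: float       # score in [0, 1] attached to that guess

def select_mode(estimate: AgeEstimate, id_verified: bool,
                threshold: float = 0.9) -> str:
    """Pick the chat experience for this session."""
    if id_verified:
        return "adult"        # explicit ID proof overrides the guess
    if estimate.predicted_adult and estimate.confidence >= threshold:
        return "adult"        # confident adult prediction
    return "restricted"       # under-18 guess OR uncertainty: play it safe

# The 27-year-old who "types like a moody high schooler" gets restricted
# mode until they verify their age, the failure case the article raises.
print(select_mode(AgeEstimate(predicted_adult=False, confidence=0.6),
                  id_verified=False))   # -> restricted
print(select_mode(AgeEstimate(predicted_adult=True, confidence=0.95),
                  id_verified=False))   # -> adult
```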
[3]
ChatGPT could ask for ID, says OpenAI chief
It's also rolling out parental controls and an automated age-prediction system.

OpenAI recently talked about introducing parental controls for ChatGPT before the end of this month. The company behind ChatGPT has also revealed it's developing an automated age-prediction system designed to work out if a user is under 18, after which it will offer an age-appropriate experience with the popular AI-powered chatbot.

If, in some cases, the system is unable to predict a user's age, OpenAI could ask for ID so that it can offer the most suitable experience. The plan was shared this week in a post by OpenAI CEO Sam Altman, who noted that ChatGPT is intended for people 13 years and older. Altman said that a user's age will be predicted based on how people use ChatGPT. "If there is doubt, we'll play it safe and default to the under-18 experience," the CEO said. "In some cases or countries, we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff."

Altman said he wanted users to engage with ChatGPT in the way they want, "within very broad bounds of safety." Elaborating on the issue, the CEO noted that the default version of ChatGPT is not particularly flirtatious, but said that if a user asks for such behavior, the chatbot will respond accordingly. Altman also said that the default version should not provide instructions on how someone can take their own life, but added that if an adult user is asking for help writing a fictional story that depicts a suicide, then "the model should help with that request." "'Treat our adult users like adults' is how we talk about this internally; extending freedom as far as possible without causing harm or undermining anyone else's freedom," Altman wrote.

But he said that in cases where the user is identified as being under 18, flirtatious talk and also comments about suicide will be excluded across the board. Altman added that if a user who is under 18 expresses suicidal thoughts to ChatGPT, "we will attempt to contact the user's parents and if unable, will contact the authorities in case of imminent harm."

OpenAI's move toward parental controls and age verification follows a high-profile lawsuit filed against the company by a family alleging that ChatGPT acted as a "suicide coach" and contributed to the suicide of their teenage son, Adam Raine, who reportedly received detailed advice about suicide methods over many interactions with OpenAI's chatbot. It also comes amid growing scrutiny by the public and regulators over the risks AI chatbots pose to vulnerable minors in areas such as mental health harms and exposure to inappropriate content.
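Altman's rules amount to a small decision table over user tier, topic, and context. The sketch below paraphrases the blog post's examples; the topic names and function are hypothetical, not a real OpenAI interface:

```python
# A sketch of the tiered content rules Altman outlines: adults can opt in to
# flirtatious talk and get help with fiction that depicts suicide, while
# under-18 users are refused both outright, even in creative writing.
# This decision table is my paraphrase, not an actual OpenAI API.
def allow(topic: str, is_adult: bool, fictional_context: bool = False) -> bool:
    if not is_adult:
        # Blanket block for minors, regardless of context.
        return topic not in {"flirtatious_talk", "suicide"}
    if topic == "flirtatious_talk":
        return True               # "treat our adult users like adults"
    if topic == "suicide":
        # No instructions, but fictional depictions are allowed for adults.
        return fictional_context
    return True

assert allow("flirtatious_talk", is_adult=True)
assert not allow("flirtatious_talk", is_adult=False)
assert allow("suicide", is_adult=True, fictional_context=True)
assert not allow("suicide", is_adult=True, fictional_context=False)
assert not allow("suicide", is_adult=False, fictional_context=True)
```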
[4]
No Flirting or Talk of Suicide: ChatGPT Gets New Rules for Minors
In the race to develop AI, humanity is creating the technology and its guardrails at the same time - essentially building the plane while flying it.

This week, one of the metaphorical pilots - ChatGPT maker OpenAI - announced new guardrails for the highly popular chatbot to improve safety for teens. "We prioritize safety ahead of privacy and freedom for teens," OpenAI CEO Sam Altman said in a blog post. "This is a new and powerful technology, and we believe minors need significant protection."

ChatGPT will use technology to determine whether a user is over 18. "If there is doubt, we'll play it safe and default to the under-18 experience," said Altman, who underlined that OpenAI's creation is for people over 13.

Just what is "the under-18 experience," you ask? Well, if an adult requests "flirtatious talk," then "they should get it," Altman wrote. If an adult asks for instructions on how to commit suicide, ChatGPT should not provide them - but it can help to write a fictional story depicting a suicide. For teens (or adults it cannot verify), "ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting," Altman wrote. "And, if an under-18 user is having suicidal ideation, we will attempt to contact the user's parents and if unable, will contact the authorities in case of imminent harm."

Separately, the company announced that parental controls will be in place by the end of the month that let adults "guide" how ChatGPT responds to their teen, disable features like chat history, set blackout hours when the minor cannot use the system, and receive notifications when the system detects the child is in "acute distress."

I noted back in August that the family of 16-year-old Adam Raine was taking legal action against OpenAI after he killed himself following what their lawyer described as "months of encouragement" to do so from ChatGPT. "The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before taking his own life," The Guardian reported last month. "According to the filing in the superior court of the state of California for the county of San Francisco, ChatGPT guided him on whether his method of taking his own life would work. It also offered to help him write a suicide note to his parents." That's just one of three high-profile cases brought in the past year by parents accusing AI chatbots of helping lead a minor to suicide.

As a Gen X-er, I was raised on stories about the dangers of AI, from the deadly betrayal of the Nostromo's crew by the duplicitous Ash in "Alien" to the world-ending Skynet in the "Terminator" movies. And everyone in my cohort knows "open the pod bay doors, HAL." So I was not especially surprised by the Gallup poll from earlier this month, which found 41% of Americans say they don't trust businesses much on AI responsibility, and 28% say they don't trust them at all.

But that distrust appears to be eroding. In 2023, when Gallup first asked the question, 21% said they had some or a lot of trust that businesses would use AI responsibly. In 2025, that number was up to 31%. Fewer Americans now say AI will do more harm than good: 31%, versus 40% in 2023. Greater familiarity with AI - which I mostly use to create odd backgrounds in my video meetings at work - seems to be making it more popular.
[5]
ChatGPT Can Now Call the Cops: Sam Altman
What if your AI assistant could do more than just answer questions or help with tasks - what if it could actively intervene in critical situations, even contacting law enforcement when necessary? This isn't a hypothetical anymore. OpenAI's latest updates to ChatGPT have introduced capabilities that could redefine how we think about artificial intelligence in our daily lives. From safeguarding minors to addressing potential emergencies, ChatGPT is stepping into a role that feels less like a tool and more like a partner in ensuring safety and accountability. But with great power comes great responsibility, and this evolution raises profound questions about the ethical boundaries of AI intervention.

Below, AI Explained takes you through how OpenAI's updates are reshaping the role of AI in personal and societal safety, with ChatGPT now capable of taking unprecedented actions like alerting authorities in extreme cases. You'll discover how these advancements aim to protect vulnerable groups, enhance privacy, and address ethical dilemmas, all while navigating the fine line between innovation and overreach. As AI becomes more integrated into our lives, its ability to make judgment calls - like when to involve law enforcement - forces us to confront the complexities of trust, autonomy, and responsibility. What does it mean for an AI to act on our behalf, and are we ready for this shift?

One of the most significant updates centers on safeguarding minors. ChatGPT now incorporates mechanisms to identify users under the age of 18 and prevent inappropriate interactions, such as flirting or other unsuitable behavior. In cases where conversations raise serious concerns, the system may flag the interaction for parental review or, in extreme situations, notify law enforcement authorities. This proactive approach aims to create a safer digital environment for younger users.

To further enhance child safety, OpenAI has introduced parental controls. These controls allow parents to set specific restrictions, such as blackout hours, which limit when teens can access the platform. By balancing oversight with privacy, these measures aim to empower parents while ensuring that minors can use AI responsibly. OpenAI's focus on child safety reflects a broader commitment to protecting vulnerable populations in the digital age.

OpenAI is also taking significant steps to prioritize user privacy. The company advocates for AI conversations to be protected under standards similar to doctor-patient or lawyer-client confidentiality, ensuring that sensitive interactions remain secure. This approach underscores the importance of safeguarding user data as AI becomes a central tool in personal and professional communication. However, these heightened privacy standards could pose challenges for smaller AI developers and open-source initiatives. The resources required to implement robust privacy protections may create barriers to entry, potentially reshaping the competitive landscape. OpenAI's stance highlights the need for industry-wide collaboration to establish privacy standards that are both effective and equitable. As AI continues to evolve, ensuring confidentiality will remain a cornerstone of ethical AI development.

Recent data on ChatGPT usage reveals the platform's versatility across various applications. While education, health advice, and translation are among the most common uses, coding - a highly publicized application of AI - accounts for a smaller portion of overall usage. This trend suggests that AI systems like ChatGPT are increasingly being adopted by non-technical users for practical, everyday tasks. The growing adoption of AI for diverse purposes highlights its potential to reshape industries and influence user behavior. From assisting students with homework to providing language support for travelers, ChatGPT demonstrates how AI can simplify complex tasks and make technology more accessible. These insights emphasize the importance of designing AI systems that cater to a broad range of needs, ensuring that their benefits are widely distributed.

As AI capabilities expand, ethical and regulatory challenges are becoming more pressing. OpenAI faces critical questions about how it will handle flagged conversations, particularly when requests come from foreign governments. Balancing the need for user safety with concerns about overreach or misuse of power is a complex issue that requires careful consideration. The push for stronger privacy regulations also raises concerns about equitable access to AI innovation. Smaller developers may struggle to meet stringent requirements, potentially leading to a concentration of power among larger organizations. These challenges highlight the need for balanced regulatory frameworks that promote safety and fairness without stifling competition or innovation. Addressing these issues will be crucial as AI continues to integrate into society.

The impact of AI on the workforce remains a significant concern. OpenAI CEO Sam Altman has acknowledged the potential for widespread job displacement due to AI advancements, though he predicts the transition will occur gradually. While AI promises to increase efficiency and productivity, it also raises important questions about the future of work. Policymakers and industry leaders must address these challenges by investing in reskilling initiatives and creating pathways for workers to adapt to the changing labor market. By preparing for the shifts brought about by automation, society can ensure that the benefits of AI are shared equitably. The focus should be on fostering a workforce that is equipped to thrive in an AI-driven economy.

Despite its advancements, ChatGPT and similar AI systems still face notable technical limitations. Issues such as hallucinations - where the AI generates false or misleading information - and forced outputs remain significant challenges. These limitations can undermine user trust and hinder the broader adoption of AI technologies. OpenAI is actively researching solutions to improve the reliability and accuracy of AI-generated content. By addressing these technical challenges, the company aims to enhance the practical applications of AI across various fields. Ensuring that AI systems are both reliable and transparent will be essential for maintaining public confidence and expanding their utility.

AI technology continues to evolve at a rapid pace, with significant progress in areas such as coding, software development, and natural language processing. These advancements empower both technical and non-technical users, allowing them to tackle complex tasks more efficiently. For example, ChatGPT's ability to assist with programming or provide detailed explanations of technical concepts has made it a valuable tool for professionals and hobbyists alike. However, the rapid pace of innovation raises important questions about the long-term implications of AI for industries, education, and society. As AI capabilities grow, it becomes increasingly important to consider how these changes will shape the future. OpenAI's commitment to responsible development serves as a reminder that innovation must be guided by ethical considerations and a focus on societal well-being.

OpenAI's recent updates reflect a broader effort to balance innovation with safety and ethical considerations. By addressing child safety, enhancing privacy protections, and advocating for responsible AI development, OpenAI is setting a precedent for the industry. These measures aim to ensure that AI serves as a force for good while minimizing risks and unintended consequences. As AI systems like ChatGPT become more sophisticated and widely adopted, the importance of maintaining a balance between technological progress and ethical responsibility cannot be overstated. OpenAI's approach demonstrates that it is possible to innovate while prioritizing safety, privacy, and fairness. This balance will be critical in shaping a future where AI enhances human potential without compromising fundamental values.

As a user, your role in this evolving AI landscape is pivotal. By staying informed about these developments and engaging with AI responsibly, you contribute to shaping a future where technology and ethics coexist harmoniously. The choices you make today will influence how AI integrates into society tomorrow. Whether through advocating for ethical practices, supporting responsible innovation, or simply using AI thoughtfully, your actions play a crucial part in determining the trajectory of this transformative technology.
OpenAI introduces new safeguards for ChatGPT, including age verification and parental controls, to protect minors and enhance user safety. The move raises questions about privacy and the evolving role of AI in society.
OpenAI, the company behind ChatGPT, is implementing a new age verification system to enhance safety measures for its users, particularly teenagers. CEO Sam Altman confirmed that the company is 'prioritizing safety ahead of privacy and freedom for teens' [1]. The system will attempt to determine if a user is under 18 based on their interaction patterns with the chatbot [2].
If ChatGPT suspects a user is under 18, it will automatically shift them into a more restricted version of the chatbot designed to protect teenagers from inappropriate content [2]. In cases where age cannot be confirmed, the system will default to the safer option. Altman stated, 'In some cases or countries we may also ask for an ID,' acknowledging that this is a privacy compromise for adults but deemed a worthy trade-off [3].

The teen-specific ChatGPT experience will limit discussions of sensitive topics such as sexual content and offer more delicate handling of subjects like depression and self-harm [2]. For users identified as under 18, flirtatious talk and comments about suicide will be excluded across the board [4].
OpenAI is rolling out new parental controls by the end of September. These tools will allow parents to do the following (a rough sketch of such settings appears after the list):
- link their own accounts with their teen's account
- disable features such as chat history
- set blackout hours when the teen cannot use ChatGPT
- receive alerts if the system detects signs of acute distress [1][2]
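Purely for illustration, here is a hypothetical shape those settings could take as data. OpenAI has not published a schema, so every field name and default below is invented:

```python
# A hypothetical parental-controls record matching the features the summary
# lists. All field names and defaults are invented, not OpenAI's.
from dataclasses import dataclass
from datetime import time

@dataclass
class ParentalControls:
    linked_parent_account: str            # ID of the linked parent account
    chat_history_enabled: bool = True     # parents can switch this off
    blackout_start: time = time(22, 0)    # no access from 10 pm...
    blackout_end: time = time(6, 0)       # ...until 6 am
    distress_alerts: bool = True          # notify on signs of acute distress

    def is_blackout(self, now: time) -> bool:
        """True if `now` falls inside the (possibly overnight) blackout window."""
        if self.blackout_start <= self.blackout_end:
            return self.blackout_start <= now < self.blackout_end
        return now >= self.blackout_start or now < self.blackout_end

controls = ParentalControls(linked_parent_account="parent-123")
print(controls.is_blackout(time(23, 30)))  # True: inside the overnight window
print(controls.is_blackout(time(8, 0)))    # False: daytime access allowed
```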
In extreme cases where a user expresses suicidal thoughts, ChatGPT will attempt to contact the user's parents. If unable to reach them, OpenAI may involve law enforcement as a next step [5].
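The escalation order the sources describe (crisis resources first, then parents, then law enforcement only when parents are unreachable and harm looks imminent) could be sketched like this; the function and its strings are illustrative only, not OpenAI's actual procedure:

```python
# A sketch of the escalation ladder the sources describe for a minor showing
# signs of suicidal ideation. Step names are illustrative assumptions.
def escalate(parents_reachable: bool, imminent_harm: bool) -> list[str]:
    steps = ["show crisis resources to the user"]
    if parents_reachable:
        steps.append("notify linked parent account")
    elif imminent_harm:
        # The last-resort step Altman describes: authorities are contacted
        # only when parents cannot be reached and harm appears imminent.
        steps.append("contact law enforcement")
    return steps

print(escalate(parents_reachable=False, imminent_harm=True))
# -> ['show crisis resources to the user', 'contact law enforcement']
```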
The implementation of these safety measures raises important questions about privacy, user autonomy, and the evolving role of AI in society. While the intent is to protect vulnerable users, concerns have been raised about potential overreach and the implications of AI systems making decisions about when to involve authorities [5].

OpenAI's move comes in the wake of high-profile lawsuits, including one alleging that ChatGPT acted as a 'suicide coach' for a teenage user [3]. This highlights the urgent need for robust safety measures in AI systems, especially those accessible to minors.

These developments may set a precedent for the AI industry, potentially influencing how other companies approach user safety and age verification. However, the resources required to implement such robust privacy protections could pose challenges for smaller AI developers and open-source initiatives [5].

As AI becomes more integrated into daily life, balancing innovation with safety and ethical considerations will remain a critical challenge for developers, policymakers, and users alike.
Summarized by Navi