17 Sources
[1]
OpenAI rolls out safety routing system, parental controls on ChatGPT | TechCrunch
OpenAI began testing a new safety routing system in ChatGPT over the weekend, and on Monday introduced parental controls to the chatbot - drawing mixed reactions from users. The safety features come in response to numerous incidents of certain ChatGPT models validating users' delusional thinking instead of redirecting harmful conversations. OpenAI is facing a wrongful death lawsuit tied to one such incident, after a teenage boy died by suicide following months of interactions with ChatGPT. The routing system is designed to detect emotionally sensitive conversations and automatically switch mid-chat to GPT-5-thinking, which the company sees as the model best equipped for high-stakes safety work. In particular, the GPT-5 models were trained with a new safety feature that OpenAI calls "safe completions," which allows them to answer sensitive questions in a safe way rather than simply refusing to engage. That is a contrast with the company's previous chat models, which were designed to be agreeable and answer questions quickly. GPT-4o has come under particular scrutiny because of its overly sycophantic, agreeable nature, which has both fueled incidents of AI-induced delusions and drawn a large base of devoted users. When OpenAI rolled out GPT-5 as the default in August, many users pushed back and demanded access to GPT-4o. While many experts and users have welcomed the safety features, others have criticized what they see as an overly cautious implementation, with some users accusing OpenAI of treating adults like children in a way that degrades the quality of the service. OpenAI has suggested that getting it right will take time and has given itself a 120-day period of iteration and improvement. Nick Turley, VP and head of the ChatGPT app, acknowledged the "strong reactions to 4o responses" prompted by the router's rollout and explained how it works. "Routing happens on a per-message basis; switching from the default model happens on a temporary basis," Turley posted on X. "ChatGPT will tell you which model is active when asked. This is part of a broader effort to strengthen safeguards and learn from real-world use before a wider rollout." The implementation of parental controls in ChatGPT received similar levels of praise and scorn, with some commending giving parents a way to keep tabs on their children's AI use, and others fearful that it opens the door to OpenAI treating adults like children. The controls let parents customize their teen's experience by setting quiet hours, turning off voice mode and memory, removing image generation, and opting out of model training. Teen accounts will also get additional content protections - like reduced graphic content and extreme beauty ideals - and a detection system that recognizes potential signs that a teen might be thinking about self-harm. "If our systems detect potential harm, a small team of specially trained people reviews the situation," per OpenAI's blog. "If there are signs of acute distress, we will contact parents by email, text message and push alert on their phone, unless they have opted out." OpenAI acknowledged that the system won't be perfect and may sometimes raise alarms when there isn't real danger, "but we think it's better to act and alert a parent so they can step in than to stay silent." The AI firm said it is also working on ways to reach law enforcement or emergency services if it detects an imminent threat to life and cannot reach a parent.
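To make the per-message routing Turley describes more concrete, here is a minimal sketch of how such a router could work. It is purely illustrative: OpenAI has not published its routing system, so the keyword-based classifier and the model names below are assumptions standing in for whatever trained classifier and GPT-5-thinking routing the company actually uses.

```python
# Illustrative sketch only: the classifier, cue list, and model names below
# are assumptions; OpenAI's actual router is not public.

SENSITIVE_CUES = {"suicide", "self-harm", "hurt myself", "hopeless", "kill myself"}

def looks_emotionally_sensitive(message: str) -> bool:
    """Placeholder classifier; a real system would use a trained model."""
    text = message.lower()
    return any(cue in text for cue in SENSITIVE_CUES)

def pick_model(message: str, default_model: str = "default-chat-model") -> str:
    """Route a single message: switch to a safety-focused reasoning model
    for this turn only, then fall back to the default on the next message,
    matching the temporary, per-message switching described above."""
    if looks_emotionally_sensitive(message):
        return "reasoning-safety-model"  # stands in for GPT-5-thinking
    return default_model

if __name__ == "__main__":
    for msg in ["What's the capital of France?",
                "I feel hopeless and want to hurt myself"]:
        print(msg, "->", pick_model(msg))
```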
[2]
ChatGPT lets parents restrict content and features for teens now - here's how
With the controls, parents can enable or disable key features. Parents concerned about how their kids use AI chatbots like ChatGPT now have additional controls to better manage and monitor their use. On Monday, OpenAI announced an expansion of its parental controls through which you can link to your teen's ChatGPT account and customize core features, time limits, and other settings. To use ChatGPT, someone must be at least 13 years old, and users between the ages of 13 and 17 must have parental permission. Now rolling out to all ChatGPT users, the new controls are designed specifically to better protect people in that age range. To set up the new account links as a parent or guardian, open ChatGPT on the web, go to Settings, and then select Parental Controls. Here, you should be able to send an invitation to your teen's account. Alternatively, your teen can also send you the link. After the invitation is accepted, you can manage your teen's ChatGPT settings from your own account. The key features and options you're able to view and control include sensitive-content filters, quiet hours, memory, voice mode, image generation and editing, and whether chats are used for model training. To respect your teen's privacy, you won't be able to see their conversations. However, you will be notified if the AI and trained human reviewers detect content that could pose serious risk or harm. Here, you can choose to receive notifications via email, text message, push notification, or all three. In developing the controls, OpenAI worked with advocacy groups such as Common Sense Media and policymakers such as the Attorneys General of California and Delaware. The company said that it expects to refine and further develop these controls over time. "These parental controls are a good starting point for parents in managing their teen's ChatGPT use," Robbie Torney, Senior Director of AI Programs for Common Sense Media, said in a statement. "Parental controls are just one piece of the puzzle when it comes to keeping kids and teens safe online, though--they work best when combined with ongoing conversations about responsible AI use, clear family rules about technology, and active involvement in understanding what their teen is doing online." Though generative AI can be a powerful and valuable tool, there are decided downsides. Bots like ChatGPT can feed people misleading, inaccurate, or even dangerous information. Teens can be especially vulnerable. In April, a teenage boy who had discussed his own suicide and methods with ChatGPT eventually took his own life. His parents have since filed a lawsuit against OpenAI charging that ChatGPT "neither terminated the session nor initiated any emergency protocol" despite an awareness of the teen's suicidal state. In a similar case, AI chatbot platform Character.ai is also being sued by a mother whose teenage son died by suicide after chatting with a bot that allegedly encouraged him. In response to these teen suicides and other cases, OpenAI has been attempting to improve its parental controls and other safety nets. In August, the company announced that it would strengthen the ways that ChatGPT responds to people in distress and update how and which type of content is blocked.
To address vulnerable teenagers in particular, OpenAI is also working to expand interventions for teens in crisis, direct them to professional resources, and involve a parent when necessary. In another step, the company is developing an age-prediction system that estimates a user's age based on how they use ChatGPT. If the AI determines that the person is between 13 and 18, it will switch to a teen version that is trained not to talk flirtatiously or engage in discussions about suicide. If the underage user is discussing suicidal thoughts, OpenAI will try to contact parents or authorities. Parents and guardians who want to stay abreast of OpenAI's safeguards can also now consult a new resource page. The page explains how ChatGPT works, which parental controls are accessible, and how teens can use the AI more safely and effectively.
[3]
One Tech Tip: OpenAI adds parental controls to ChatGPT for teen safety
LONDON (AP) -- OpenAI said Monday it's adding parental controls to ChatGPT that are designed to provide teen users of the popular platform with a safer and more "age-appropriate" experience. The company is taking action after AI chatbot safety for young users has hit the headlines. The technology's dangers have been recently highlighted by a number of cases in which teenagers took their lives after interacting with ChatGPT. In the United States, the Federal Trade Commission has even opened an inquiry into several tech companies about the potential harms to children and teenagers who use their AI chatbots as companions. In a blog post posted Monday, OpenAI outlined the new controls for parents. Here is a breakdown: The parental controls will be available to all users, but both parents and teens will need their own accounts to take advantage of them. To get started, a parent or guardian needs to send an email or text message to invite a teen to connect their accounts. Or a teenager can send an invite to a parent. Users can send a request by going into the settings menu and then to the "Parental controls" section. Teens can unlink their accounts at any time, but parents will be notified if they do. Once the accounts are linked, the teen account will get some built-in protections, OpenAI said. Teen accounts will "automatically get additional content protections, including reduced graphic content, viral challenges, sexual, romantic or violent role-play, and extreme beauty ideals, to help keep their experience age-appropriate," the company said. Parents can choose to turn these filters off, but teen users don't have the option. OpenAI warns that such guardrails are "not foolproof and can be bypassed if someone is intentionally trying to get around them." It advised parents to talk with their children about "healthy AI use." Parents are getting a control panel where they can adjust a range of settings as well as switch off the restrictions on sensitive content mentioned above. For example, does your teen stay up way past bedtime to use ChatGPT? Parents can set a quiet time when the chatbot can't be used. Other settings include turning off the AI's memory so conversations can't be saved and won't be used in future responses; turning off the ability to generate or edit images; turning off voice mode; and opting out of having chats used to train ChatGPT's AI models. OpenAI is also being more proactive when it comes to letting parents know that their child might be in distress. It's setting up a new notification system to inform them when something might be "seriously wrong" and a teen user might be thinking about harming themselves. A small team of specialists will review the situation and, in the rare case that there are "signs of acute distress," they'll notify parents by email, text message and push alert on their phone -- unless the parent has opted out. OpenAI said it will protect the teen's privacy by only sharing the information needed for parents or emergency responders to provide help. "No system is perfect, and we know we might sometimes raise an alarm when there isn't real danger, but we think it's better to act and alert a parent so they can step in than to stay silent," the company said. ____ Is there a tech topic that you think needs explaining? Write to us at [email protected] with your suggestions for future editions of One Tech Tip.
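As a rough illustration of the linking and settings model the breakdown above describes, the sketch below encodes the key rules: safeguards default on for a linked teen account, only the parent can change them, and unlinking by the teen triggers a parent notification. The Python data model and field names are assumptions made for illustration, not OpenAI's actual implementation or API.

```python
# Assumed, simplified model of the parent-teen account link described above.
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class TeenSettings:
    reduced_sensitive_content: bool = True     # on by default once linked
    memory_enabled: bool = True
    voice_mode_enabled: bool = True
    image_generation_enabled: bool = True
    exclude_from_model_training: bool = False
    quiet_hours: Optional[Tuple[str, str]] = None  # e.g. ("22:00", "07:00")

@dataclass
class LinkedAccounts:
    parent_id: str
    teen_id: str
    settings: TeenSettings = field(default_factory=TeenSettings)

    def update_setting(self, actor_id: str, name: str, value) -> None:
        # Only the linked parent may change these settings; the teen cannot.
        if actor_id != self.parent_id:
            raise PermissionError("only the linked parent can change these settings")
        if not hasattr(self.settings, name):
            raise AttributeError(f"unknown setting: {name}")
        setattr(self.settings, name, value)

    def unlink(self, actor_id: str) -> str:
        # Either side can unlink, but the parent is notified if the teen does it.
        if actor_id == self.teen_id:
            return "unlinked by teen; parent notified"
        return "unlinked by parent"

# Example: once linked, the teen account gets protections by default,
# and only the parent can relax or tighten them.
link = LinkedAccounts(parent_id="parent-1", teen_id="teen-1")
link.update_setting("parent-1", "quiet_hours", ("22:00", "07:00"))
print(link.settings)
print(link.unlink("teen-1"))
```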
[4]
ChatGPT just launched Parental Controls -- here's my advice to parents
Everything to know about keeping your kids safe with ChatGPT. As I write this, two of my kids are playing Roblox and another is in his room playing Fortnite before a busy Saturday of soccer tournaments. I know firsthand how fast tech changes and how quickly kids adapt to it. My 11-year-old knows his way around AI tools better than some adults, which is both impressive and a little terrifying. That's why OpenAI's new parental controls for ChatGPT feel like an important step forward, although some parents would probably agree that these controls are long overdue. ChatGPT has been around for nearly three years, and yet we're only seeing these controls now. Even what OpenAI is giving us isn't great. Still, these controls are better than nothing and should give parents like me a better way to set boundaries and feel more comfortable about how today's kids use AI. Starting today, parents can link their account to their teen's account and manage settings directly from their own dashboard. Once linked, teens get extra safeguards by default. These limitations include stricter filters on things like roleplay or challenges, less graphic content and no extreme beauty ideals. Other new features include quiet hours, the ability to turn off voice mode, memory, and image generation, and an opt-out from model training. If a teen tries to unlink their account, parents get notified. And if ChatGPT detects signs of distress or potential self-harm, parents can be alerted by email, text, or push notification. These guardrails are a small start, but whether or not they are enough is yet to be seen. As we know, determined teens can find ways around things like this, and no AI filter is foolproof. But as someone raising kids and who tests AI for a living, I'd much rather have the option to customize how my children use these tools than feel like I have no control at all. Think of this update the same way you think about parental controls on other devices. You're not blocking everything; you're guiding the experience so it fits your family's values. My advice to parents: OpenAI says it's working on an age prediction system that will eventually apply teen-appropriate settings automatically. Until then, these parental controls are the best way to ensure your kids have a safe, age-appropriate experience with ChatGPT. For me, Parental Controls are about making sure my kids feel safe talking to me about how they're using AI, while still giving them room to explore.
[5]
ChatGPT is getting parental controls starting today - here's what they do and how to set them up
OpenAI is rolling out its long-awaited ChatGPT parental controls feature, starting today. The age limit for ChatGPT is 13 years old, but there has never been a way for parents to control how their children used ChatGPT until now. Using the new controls, parents and teens can link their accounts to get stronger safeguards including new tools to adjust features and set time limits for children. In a post on X.com, OpenAI outlines exactly what the new features are. Firstly, you'll be able to "reduce sensitive content", which refers to graphic content and viral challenges, and is turned on by default when a teen account is connected. Parents will also be able to control ChatGPT's memory, deciding if it will be able to remember past chats for more personalized responses. They'll also get the ability to set quiet hours, which will enable them to set times when their teen cannot use ChatGPT. Parental controls can turn off access to the ChatGPT Voice model and image creation. Finally, parents will also be able to decide if their teens' chats can be used by OpenAI to improve its future models, or not. Parents will not be able to impose parental restrictions on their teens without some consent. To link two ChatGPT accounts together, a parent or teen must send an invitation in parental controls, and the other party needs to accept it. OpenAI will notify parents if their teens disconnect their account at any point. In any case, parents will not have access to their teen's chats. Some elements of the chat can be sent to parents, but that will only happen in rare cases where OpenAI and its trained reviewers detect that there are possible signs of a serious safety risk. While the new parental controls may not go far enough for some people, a statement from OpenAI reads: "We've worked closely with experts, advocacy groups, and policymakers to help inform our approach - we expect to refine and expand on these controls over time." Once your account has been upgraded, the new Parental Controls option will sit under Accounts in the Settings menu. A series of sliders will be available to manage your teen's options, covering sensitive content, memory, quiet hours, voice mode, image creation, and model training. The new parental controls feature arrives just as OpenAI has implemented a new safety routing system in ChatGPT for all users. Posting on X.com, Head of ChatGPT Nick Turley wrote: "As we previously mentioned, when conversations touch on sensitive and emotional topics the system may switch mid-chat to a reasoning model or GPT-5 designed to handle these contexts with extra care". It seems, however, that the system is currently triggered very easily and overreacts to any potentially sensitive content, sparking a furious response from ChatGPT users who objected to being switched to what they consider an inferior model, even if they are paying for a ChatGPT Plus account. One user on Reddit described how he told ChatGPT that his plant had been knocked over in a storm and it responded to him by saying: "Just breathe. It's going to be okay. You're safe now." We would expect the model switching system to improve over time. These new features are part of OpenAI's general push towards stronger safeguards, which it described in a September 2 blog post called "Building more helpful ChatGPT experiences for everyone" after several highly publicized controversies involving users in crisis while using the AI chatbot.
OpenAI has created a new resources page to help parents understand how the new controls will work best for them and their children.
[6]
ChatGPT to alert parents if children discuss suicide
ChatGPT has introduced a new tool to alert parents if their children try to discuss suicide or self-harm with its AI chatbot. OpenAI, the company behind the technology, is rolling out tougher parental controls amid claims its technology has contributed to some children taking their own lives. This means that parents will now be able to link their own ChatGPT account to a child's, allowing the chatbot to pass on alerts if it detects signs of potential self-harm. An OpenAI spokesman said: "If our systems detect potential harm, a small team of specially trained people reviews the situation. "If there are signs of acute distress, we will contact parents by email, text message and push alert on their phone." The Silicon Valley business stated that it was also developing a system to alert emergency services if it detects a threat to life. The safeguards have been introduced as part of a fleet of new parental settings, including tools to limit when children can access the technology. Parents will also be able to limit a child's ability to create images and remove potentially graphic, violent or sexualised content. Adults will have to choose to turn on the safety controls, which children will not be allowed to turn off themselves. It comes amid growing concerns that AI chatbots are failing to protect children by engaging in explicit conversations about topics such as suicide or self-harm. In August, the parents of a US teenager sued OpenAI over claims the chatbot had encouraged their child to take his own life after he started using it for schoolwork. According to the lawsuit, 16-year-old Adam Raine discussed self-harm and suicide with ChatGPT for months before his death in April.
[7]
ChatGPT rolls out parental controls after teen's death
OpenAI is bringing in parental controls for ChatGPT, after it was sued by the parents of a teenager who died by suicide after allegedly being coached by the chatbot. The controls will let parents and teens link accounts, decide whether ChatGPT remembers past chats, and limit teens' exposure to sensitive content, the company said Monday. OpenAI said it will also try to notify parents when it detects "signs of acute distress" in a child. "If our systems detect potential harm, a small team of specially trained people reviews the situation," the company said. The chatbot maker added that its system isn't perfect and "might sometimes raise an alarm when there isn't real danger." The rollout comes after allegations by a California family that ChatGPT played a role in their son's death. The lawsuit, filed in August by Matt and Maria Raine, accuses OpenAI and its chief executive, Sam Altman, of negligence and wrongful death. They allege that the version of ChatGPT at that time, known as 4o, was "rushed to market ... despite clear safety issues". Their son, Adam, died in April, after what their lawyer, Jay Edelson, called "months of encouragement from ChatGPT." Court filings revealed conversations he allegedly had with the chatbot where he disclosed suicidal thoughts. The family allege Adam received responses that reinforced his "most harmful and self-destructive" ideas. [Editor's note: The national suicide and crisis lifeline is available by calling or texting 988, or visiting 988lifeline.org.] OpenAI's Monday blog post said parents will be allowed to set quiet hours so as to block access at certain times of day. They will also be able to disable voice mode, and stop the app from generating images. Parents will not be able to access their children's chat transcripts. "We've worked closely with experts, advocacy groups, and policymakers to help inform our approach -- we expect to refine and expand on these controls over time," the company said in a statement. Robbie Torney, senior director of AI Programs at nonprofit Common Sense Media, said the controls are "a good starting point," but added they are "just one piece of the puzzle" on online safety. "They work best when combined with ongoing conversations about responsible AI use, clear family rules about technology, and active involvement in understanding what their teen is doing online," he said. Two weeks ago, Altman said the company will at some point try to detect underage ChatGPT users. "If there is doubt, we'll play it safe and default to the under-18 experience," Altman said. "In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff." Researchers have documented how easy it is to circumvent limits set by chatbot companies. Age-verification rules are also known to be easily bypassed. -- Niamh Rowe and Hannah Parker contributed to this article.
[8]
ChatGPT's parental controls sound great, but there's one problem
Parental controls are currently rolling out for the web version, and will soon arrive on mobile devices. Over the past few months, OpenAI has received backlash over a series of events where conversations with ChatGPT resulted in a human tragedy. In the wake of the incidents and the looming threat of AI chatbot regulations, OpenAI has finally rolled out the promised parental controls that would allow parents to ensure that their wards' interactions are safe. What's the big shift? The first line of defense is content safety. OpenAI says parents will have the power to "add extra safeguards to help protect teens, such as graphic content and viral challenges." Additionally, they can also decide to turn on or off the chatbot's memory, which allows it to remember previous conversations. Next, parents will have the flexibility to specify a usage window by setting "quiet hours" in which ChatGPT can't be accessed. Finally, guardians can choose to disable the conversational voice mode, and even shut off image creation and editing. Notably, parents won't be able to see the conversations of teenage users. But in rare cases where the system detects "signs of serious safety risk," they will be warned about it. OpenAI has also launched a full-fledged resource page with instructions and guidelines to set parental controls. The open-ended problem: In order to enable the aforementioned safety controls, parents must link their accounts to their teenage children's ChatGPT accounts. This can be done by sending an invite link via email or text message. Once the link is accepted by the teen user to connect the accounts, guardians can set up the safety controls mentioned above. But do keep in mind that the whole process is opt-in, and teen users can choose to unlink their accounts and end the supervision at any time. Parents will be notified about the unlinking step, notes OpenAI. The bigger concern is that young users can easily circumvent these limitations by setting up another account to use ChatGPT without any restrictions. Moreover, if their parents monitor the user accounts registered on a phone, teen users can simply log in with a burner account and use the web version.
[9]
ChatGPT rolls out new parental controls
OpenAI, the company behind ChatGPT, said Monday that parents can now link their accounts to ChatGPT accounts designed for minors between 13 and 17 years old. The new chatbot accounts for minors will limit answers related to graphic content, romantic and sexual roleplay, viral challenges and "extreme beauty ideals," OpenAI said. Parents will also have the option to set blackout hours where their teenager can't use ChatGPT, to block it from creating images and to opt their child out from AI model training. OpenAI uses most people's conversations with ChatGPT as training data to refine the chatbot. The company will also alert parents if their teenager's account indicates that they are thinking of harming themselves, it said. The new controls come as OpenAI has faced pressure around child safety concerns. OpenAI announced the new safety measures earlier this month, in the wake of a family's lawsuit against it, alleging that ChatGPT encouraged their son to die by suicide. The announcement came the morning of a scheduled Senate Judiciary Committee hearing on the potential harms of AI. While a logged-in teenager whose family has opted into the controls will see the new restrictions, ChatGPT does not require a person to sign in or provide their age to ask a question or engage with the chatbot. OpenAI's chatbots are not designed for children 12 and younger, though there are no technical restrictions that keep someone that young from using them. "Guardrails help, but they're not foolproof and can be bypassed if someone is intentionally trying to get around them. We will continue to thoughtfully iterate and improve over time. We recommend parents talk with their teens about healthy AI use and what that looks like for their family," OpenAI said in its announcement. The company said it's also building an age-prediction system that will automatically try to determine if a person is underage and "proactively" restrict more sensitive answers, though such a system is months away, it said. OpenAI has also said it may eventually require users to upload their ID to prove their age, but did not give an update on that initiative on Monday. In a Sept. 16 blog post announcing the changes, OpenAI CEO Sam Altman said that a chatbot for teenagers should not flirt and should censor discussion of suicide, but that a version for adults should be more open. ChatGPT "by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request." "'Treat our adult users like adults' is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else's freedom," Altman said.
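Altman's "default to the under-18 experience" policy can be summarized as a simple decision rule. The sketch below is an assumption-laden illustration: the age predictor, confidence score, and threshold are invented for the example, since OpenAI has only said such a system is months away.

```python
# Hedged sketch of the stated policy: when in doubt about age, apply the
# under-18 experience. The predictor and threshold are assumptions.
from typing import Optional

def choose_experience(predicted_age: Optional[int], confidence: float,
                      confidence_threshold: float = 0.9) -> str:
    """Pick which experience profile to apply for a session."""
    if (predicted_age is None
            or confidence < confidence_threshold
            or predicted_age < 18):
        return "under-18"
    return "adult"

print(choose_experience(None, 0.0))   # unknown age -> under-18
print(choose_experience(25, 0.95))    # confidently adult -> adult
print(choose_experience(25, 0.50))    # uncertain -> under-18
```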
[10]
OpenAI releases parental controls for ChatGPT: Here's how to use it
OpenAI has added long-awaited parental controls to ChatGPT that are designed to provide teen users of the popular platform with a safer and more "age-appropriate" experience, the company said on Monday. It comes after AI chatbot safety for young users has hit the headlines. The technology's dangers have been recently highlighted by several cases in which teenagers took their own lives after interacting with ChatGPT. In a blog post posted Monday, OpenAI outlined the new controls for parents. Here is a breakdown. The parental controls will be available to all users, but both parents and teens will need their own accounts to take advantage of them. To get started, a parent or guardian needs to send an email or text message to invite a teen to connect their accounts, or a teenager can send an invite to a parent. Users can send a request by going into the settings menu and then to the "Parental controls" section. Teens can unlink their accounts at any time, but parents will be notified if they do. Once the accounts are linked, the teen account will get some built-in protections, OpenAI said. Teen accounts will "automatically get additional content protections, including reduced graphic content, viral challenges, sexual, romantic or violent role-play, and extreme beauty ideals, to help keep their experience age-appropriate," the company said. Parents can choose to turn these filters off, but teen users don't have the option. OpenAI warns that such guardrails are "not foolproof and can be bypassed if someone is intentionally trying to get around them". It advised parents to talk with their children about "healthy AI use". Parents are getting a control panel where they can adjust a range of settings as well as switch off the restrictions on sensitive content mentioned above. For example, does your teen stay up way past bedtime to use ChatGPT? Parents can set a quiet time when the chatbot can't be used. Other settings include turning off the AI's memory so conversations can't be saved and won't be used in future responses; turning off the ability to generate or edit images; turning off voice mode; and opting out of having chats used to train ChatGPT's AI models. OpenAI is also being more proactive when it comes to letting parents know that their child might be in distress. It's setting up a new notification system to inform them when something might be "seriously wrong" and a teen user might be thinking about harming themselves. A small team of specialists will review the situation and, in the rare case that there are "signs of acute distress," they'll notify parents by email, text message, and push alert on their phone -- unless the parent has opted out. OpenAI said it will protect the teen's privacy by only sharing the information needed for parents or emergency responders to provide help. "No system is perfect, and we know we might sometimes raise an alarm when there isn't real danger, but we think it's better to act and alert a parent so they can step in than to stay silent," the company said.
[11]
One Tech Tip: OpenAI adds parental controls to ChatGPT for teen safety
LONDON (AP) -- OpenAI said Monday it's adding parental controls to ChatGPT that are designed to provide teen users of the popular platform with a safer and more "age-appropriate" experience. The company is taking action after AI chatbot safety for young users has hit the headlines. The technology's dangers have been recently highlighted by a number of cases in which teenagers took their lives after interacting with ChatGPT. In the United States, the Federal Trade Commission has even opened an inquiry into several tech companies about the potential harms to children and teenagers who use their AI chatbots as companions. In a blog post posted Monday, OpenAI outlined the new controls for parents. Here is a breakdown: Getting started The parental controls will be available to all users, but both parents and teens will need their own accounts to take advantage of them. To get started, a parent or guardian needs to send an email or text message to invite a teen to connect their accounts. Or a teenager can send an invite to a parent. Users can send a request by going into the settings menu and then to the "Parental controls" section. Teens can unlink their accounts at any time, but parents will be notified if they do. Automatic safeguards Once the accounts are linked, the teen account will get some built-in protections, OpenAI said. Teen accounts will "automatically get additional content protections, including reduced graphic content, viral challenges, sexual, romantic or violent role-play, and extreme beauty ideals, to help keep their experience age-appropriate," the company said. Parents can choose to turn these filters off, but teen users don't have the option. OpenAI warns that such guardrails are "not foolproof and can be bypassed if someone is intentionally trying to get around them." It advised parents to talk with their children about "healthy AI use." Adjusting settings Parents are getting a control panel where they can adjust a range of settings as well as switch off the restrictions on sensitive content mentioned above. For example, does your teen stay up way past bedtime to use ChatGPT? Parents can set a quiet time when the chatbot can't be used. Other settings include turning off the AI's memory so conversations can't be saved and won't be used in future responses; turning off the ability to generate or edit images; turning off voice mode; and opting out of having chats used to train ChatGPT's AI models. Get notified OpenAI is also being more proactive when it comes to letting parents know that their child might be in distress. It's setting up a new notification system to inform them when something might be "seriously wrong" and a teen user might be thinking about harming themselves. A small team of specialists will review the situation and, in the rare case that there are "signs of acute distress," they'll notify parents by email, text message and push alert on their phone -- unless the parent has opted out. OpenAI said it will protect the teen's privacy by only sharing the information needed for parents or emergency responders to provide help. "No system is perfect, and we know we might sometimes raise an alarm when there isn't real danger, but we think it's better to act and alert a parent so they can step in than to stay silent," the company said. ____ Is there a tech topic that you think needs explaining? Write to us at [email protected] with your suggestions for future editions of One Tech Tip.
[12]
One Tech Tip: OpenAI adds parental controls to ChatGPT for teen safety
LONDON -- OpenAI said Monday it's adding parental controls to ChatGPT that are designed to provide teen users of the popular platform with a safer and more "age-appropriate" experience. The company is taking action after AI chatbot safety for young users has hit the headlines. The technology's dangers have been recently highlighted by a number of cases in which teenagers took their lives after interacting with ChatGPT. In the United States, the Federal Trade Commission has even opened an inquiry into several tech companies about the potential harms to children and teenagers who use their AI chatbots as companions. In a blog post posted Monday, OpenAI outlined the new controls for parents. Here is a breakdown: The parental controls will be available to all users, but both parents and teens will need their own accounts to take advantage of them. To get started, a parent or guardian needs to send an email or text message to invite a teen to connect their accounts. Or a teenager can send an invite to a parent. Users can send a request by going into the settings menu and then to the "Parental controls" section. Teens can unlink their accounts at any time, but parents will be notified if they do. Once the accounts are linked, the teen account will get some built-in protections, OpenAI said. Teen accounts will "automatically get additional content protections, including reduced graphic content, viral challenges, sexual, romantic or violent role-play, and extreme beauty ideals, to help keep their experience age-appropriate," the company said. Parents can choose to turn these filters off, but teen users don't have the option. OpenAI warns that such guardrails are "not foolproof and can be bypassed if someone is intentionally trying to get around them." It advised parents to talk with their children about "healthy AI use." Parents are getting a control panel where they can adjust a range of settings as well as switch off the restrictions on sensitive content mentioned above. For example, does your teen stay up way past bedtime to use ChatGPT? Parents can set a quiet time when the chatbot can't be used. Other settings include turning off the AI's memory so conversations can't be saved and won't be used in future responses; turning off the ability to generate or edit images; turning off voice mode; and opting out of having chats used to train ChatGPT's AI models. OpenAI is also being more proactive when it comes to letting parents know that their child might be in distress. It's setting up a new notification system to inform them when something might be "seriously wrong" and a teen user might be thinking about harming themselves. A small team of specialists will review the situation and, in the rare case that there are "signs of acute distress," they'll notify parents by email, text message and push alert on their phone -- unless the parent has opted out. OpenAI said it will protect the teen's privacy by only sharing the information needed for parents or emergency responders to provide help. "No system is perfect, and we know we might sometimes raise an alarm when there isn't real danger, but we think it's better to act and alert a parent so they can step in than to stay silent," the company said. ____ Is there a tech topic that you think needs explaining? Write to us at [email protected] with your suggestions for future editions of One Tech Tip.
[13]
One Tech Tip: OpenAI Adds Parental Controls to ChatGPT for Teen Safety
LONDON (AP) -- OpenAI said Monday it's adding parental controls to ChatGPT that are designed to provide teen users of the popular platform with a safer and more "age-appropriate" experience. The company is taking action after AI chatbot safety for young users has hit the headlines. The technology's dangers have been recently highlighted by a number of cases in which teenagers took their lives after interacting with ChatGPT. In the United States, the Federal Trade Commission has even opened an inquiry into several tech companies about the potential harms to children and teenagers who use their AI chatbots as companions. In a blog post posted Monday, OpenAI outlined the new controls for parents. Here is a breakdown: Getting started The parental controls will be available to all users, but both parents and teens will need their own accounts to take advantage of them. To get started, a parent or guardian needs to send an email or text message to invite a teen to connect their accounts. Or a teenager can send an invite to a parent. Users can send a request by going into the settings menu and then to the "Parental controls" section. Teens can unlink their accounts at any time, but parents will be notified if they do. Automatic safeguards Once the accounts are linked, the teen account will get some built-in protections, OpenAI said. Teen accounts will "automatically get additional content protections, including reduced graphic content, viral challenges, sexual, romantic or violent role-play, and extreme beauty ideals, to help keep their experience age-appropriate," the company said. Parents can choose to turn these filters off, but teen users don't have the option. OpenAI warns that such guardrails are "not foolproof and can be bypassed if someone is intentionally trying to get around them." It advised parents to talk with their children about "healthy AI use." Adjusting settings Parents are getting a control panel where they can adjust a range of settings as well as switch off the restrictions on sensitive content mentioned above. For example, does your teen stay up way past bedtime to use ChatGPT? Parents can set a quiet time when the chatbot can't be used. Other settings include turning off the AI's memory so conversations can't be saved and won't be used in future responses; turning off the ability to generate or edit images; turning off voice mode; and opting out of having chats used to train ChatGPT's AI models. Get notified OpenAI is also being more proactive when it comes to letting parents know that their child might be in distress. It's setting up a new notification system to inform them when something might be "seriously wrong" and a teen user might be thinking about harming themselves. A small team of specialists will review the situation and, in the rare case that there are "signs of acute distress," they'll notify parents by email, text message and push alert on their phone -- unless the parent has opted out. OpenAI said it will protect the teen's privacy by only sharing the information needed for parents or emergency responders to provide help. "No system is perfect, and we know we might sometimes raise an alarm when there isn't real danger, but we think it's better to act and alert a parent so they can step in than to stay silent," the company said. ____ Is there a tech topic that you think needs explaining? Write to us at [email protected] with your suggestions for future editions of One Tech Tip.
[14]
ChatGPT rolls out new restrictions after criticisms over teen suicides
OpenAI has introduced new parental controls for ChatGPT, enabling parents to link accounts and manage settings such as quiet hours and content filters for their teens. These updates include stronger safeguards against graphic content and a notification system for self-harm. A new parent resource page also offers guidance for responsible AI use. OpenAI has introduced parental controls and a new parent resource page for all ChatGPT users, giving families more tools to guide how teens use the platform. The update, available from today, allows parents to link their account with their teen's account and manage settings for a safer, age-appropriate experience. Once linked, parents can set limits such as quiet hours, turn off voice mode, disable image generation, stop memory from being used, and opt out of model training. Teens cannot make these changes on their own, though parents can adjust settings as needed. In addition to controls, teen accounts linked to parents will automatically receive stronger safeguards, including reduced exposure to graphic content, viral challenges, romantic or violent roleplay, and extreme beauty ideals. Parents can choose to turn these protections off, but teens cannot. A notification system has also been added to alert parents if ChatGPT detects signs that a teen may be thinking of self-harm. Trained staff will review such cases, and if there are signs of acute distress, parents will be contacted through email, text, or phone alerts. In rare instances, law enforcement or emergency services may be involved if there is an imminent threat. "These parental controls are a good starting point for parents in managing their teen's ChatGPT use. Parental controls are just one piece of the puzzle when it comes to keeping kids and teens safe online, though -- they work best when combined with ongoing conversations about responsible AI use, clear family rules about technology, and active involvement in understanding what their teen is doing online," Robbie Torney, Senior Director of AI Programs at Common Sense Media, said. Alongside the controls, OpenAI has launched a parent resource page with information on how ChatGPT works, guides for setting up controls, and suggestions for positive use in schoolwork, creativity, and family activities. The company said it plans to expand and update the resource page with expert advice and conversation tips over time.
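The notification flow described above (automated detection, review by trained staff, then alerts to parents and, in extreme cases, emergency services) can be sketched as a simple pipeline. The function below is a simplified assumption of how those steps fit together, not OpenAI's actual system; in particular, the company has said that reaching emergency services when a parent cannot be contacted is still being worked on.

```python
# Assumed, simplified escalation pipeline for a flagged conversation.
from typing import List

def escalate_self_harm_signal(reviewer_confirms_acute_distress: bool,
                              parent_opted_out: bool,
                              parent_reachable: bool,
                              imminent_threat: bool) -> List[str]:
    """Return the actions taken for one conversation flagged for self-harm risk."""
    actions = ["send flagged conversation to a small team of trained reviewers"]
    if not reviewer_confirms_acute_distress:
        return actions  # reviewed, no alert sent (false alarms are possible)
    if not parent_opted_out:
        actions += ["email parent", "text parent", "push alert to parent's phone"]
    if imminent_threat and not parent_reachable:
        # Described by OpenAI as in progress, not live today.
        actions.append("attempt to involve emergency services")
    return actions

print(escalate_self_harm_signal(True, False, True, False))
```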
[15]
OpenAI Lets Parents Track Kids' ChatGPT Activity | PYMNTS.com
The artificial intelligence startup's parental controls now let parents connect their accounts with their teen's account, according to a Monday (Sept. 29) company blog post. The announcement came weeks after OpenAI said it was developing new child safety measures for its AI chatbot, including an age verification system. "Teens are growing up with AI, and it's on us to make sure ChatGPT meets them where they are," OpenAI said in a Sept. 16 blog post. "The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult." If OpenAI is not confident about a user's age or has incomplete information, it will "default to the under-18 experience," the post said. These new measures came weeks after a lawsuit from the parents of a teen who died by suicide after conversations with ChatGPT in which the chatbot allegedly encouraged the boy's actions. The Federal Trade Commission, meanwhile, said it wants to study how AI can affect children's mental health and safety. The FTC announced earlier this month that it is issuing orders to OpenAI and six other providers of AI chatbots seeking information on how those companies measure and monitor potentially harmful impacts of their technology on young users. The other companies include Google, Character.AI, Instagram, Meta, Snap and xAI. "AI chatbots may use generative artificial intelligence technology to simulate human-like communication and interpersonal relationships with users," the FTC said in a Sept. 11 news release. "AI chatbots can effectively mimic human characteristics, emotions and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots." Also Monday, OpenAI shared some of the patterns it has seen from users attempting to share or generate child sexual abuse material (CSAM) and child sexual exploitation material (CSEM). "In some cases, we encounter users attempting to coax the model into engaging in fictional sexual roleplay scenarios while uploading CSAM as part of the narrative," the company wrote in a blog post. "We have also seen users attempt to coax the model into writing fictional stories where minors are put in sexually inappropriate and/or abusive situations -- which is a violation of our child safety policies, and we take swift action to detect these attempts and ban the associated accounts."
[16]
OpenAI adds parental controls to ChatGPT for teen safety
LONDON -- OpenAI said Monday it's adding parental controls to ChatGPT that are designed to provide teen users of the popular platform with a safer and more "age-appropriate" experience. The company is taking action after AI chatbot safety for young users has hit the headlines. The technology's dangers have been recently highlighted by a number of cases in which teenagers took their lives after interacting with ChatGPT. In the United States, the Federal Trade Commission has even opened an inquiry into several tech companies about the potential harms to children and teenagers who use their AI chatbots as companions. In a blog post posted Monday, OpenAI outlined the new controls for parents. Here is a breakdown: The parental controls will be available to all users, but both parents and teens will need their own accounts to take advantage of them. To get started, a parent or guardian needs to send an email or text message to invite a teen to connect their accounts. Or a teenager can send an invite to a parent. Users can send a request by going into the settings menu and then to the "Parental controls" section. Teens can unlink their accounts at any time, but parents will be notified if they do. Once the accounts are linked, the teen account will get some built-in protections, OpenAI said. Teen accounts will "automatically get additional content protections, including reduced graphic content, viral challenges, sexual, romantic or violent role-play, and extreme beauty ideals, to help keep their experience age-appropriate," the company said. Parents can choose to turn these filters off, but teen users don't have the option. OpenAI warns that such guardrails are "not foolproof and can be bypassed if someone is intentionally trying to get around them." It advised parents to talk with their children about "healthy AI use." Parents are getting a control panel where they can adjust a range of settings as well as switch off the restrictions on sensitive content mentioned above. For example, does your teen stay up way past bedtime to use ChatGPT? Parents can set a quiet time when the chatbot can't be used. Other settings include turning off the AI's memory so conversations can't be saved and won't be used in future responses; turning off the ability to generate or edit images; turning off voice mode; and opting out of having chats used to train ChatGPT's AI models. OpenAI is also being more proactive when it comes to letting parents know that their child might be in distress. It's setting up a new notification system to inform them when something might be "seriously wrong" and a teen user might be thinking about harming themselves. A small team of specialists will review the situation and, in the rare case that there are "signs of acute distress," they'll notify parents by email, text message and push alert on their phone -- unless the parent has opted out. OpenAI said it will protect the teen's privacy by only sharing the information needed for parents or emergency responders to provide help. "No system is perfect, and we know we might sometimes raise an alarm when there isn't real danger, but we think it's better to act and alert a parent so they can step in than to stay silent," the company said.
[17]
OpenAI to bring parental controls in ChatGPT after California teen's suicide
(Reuters) -OpenAI is rolling out parental controls for ChatGPT on the web and mobile, following a lawsuit by the parents of a teen who died by suicide after the artificial intelligence startup's chatbot allegedly coached him on methods of self-harm. The company said on Monday the controls will allow parents and teens to link accounts for stronger safeguards for teenagers. U.S. regulators are increasingly scrutinizing AI companies over the potential negative impacts of chatbots. In August, Reuters had reported how Meta's AI rules allowed flirty conversations with kids. Under the new measures, parents will be able to reduce exposure to sensitive content, control whether ChatGPT remembers past chats, and decide if conversations can be used to train OpenAI's models, the Microsoft-backed company said on X. Parents will also be allowed to set quiet hours that block access during certain times and disable voice mode as well as image generation and editing, OpenAI said. However, parents will not have access to a teen's chat transcripts, the company added. In rare cases where systems and trained reviewers detect signs of a serious safety risk, parents may be notified with only the information needed to support the teen's safety, OpenAI said. Meta had also announced new teenager safeguards to its AI products last month. The company said it will train systems to avoid flirty conversations and discussions of self-harm or suicide with minors and temporarily restrict access to certain AI characters. (Reporting by Jaspreet Singh in Bengaluru; Editing by Leroy Leo)
OpenAI rolls out new parental controls and safety routing system for ChatGPT, aiming to protect teen users and address concerns about AI-induced harm. The move draws mixed reactions from users and experts.
OpenAI has rolled out a new set of parental controls for ChatGPT, aimed at providing a safer and more age-appropriate experience for teenage users. The controls, which became available on Monday, allow parents to link their accounts with their teens' and manage various settings [1][2].
The new controls offer several customization options for parents, including setting quiet hours, turning off voice mode and memory, disabling image generation, reducing exposure to sensitive content, and opting out of model training [3].
Alongside parental controls, OpenAI has introduced a new safety routing system designed to detect emotionally sensitive conversations. When such topics are identified, the system automatically switches to GPT-5-thinking, which is considered better equipped for high-stakes safety work [1].
In cases where the AI system detects potential signs of serious distress or self-harm, a small team of trained specialists will review the situation. If acute distress is identified, parents will be notified via email, text message, and push alerts, unless they have opted out of this feature [2][5].
The introduction of these safety features has drawn mixed reactions from users and experts. While many have welcomed the increased protection for young users, others have criticized what they perceive as an overly cautious approach that may degrade the quality of the service [1].
OpenAI acknowledges that the system isn't perfect and may sometimes raise alarms unnecessarily. However, the company believes it's better to err on the side of caution. Nick Turley, VP and head of the ChatGPT app, stated that the company is committed to refining and expanding these controls over time [1][5].
These new features come in response to recent incidents involving AI-induced harm, including a wrongful death lawsuit filed against OpenAI after a teenage boy died by suicide following interactions with ChatGPT [1]. The company is also developing an age-prediction system to automatically apply teen-appropriate settings based on user behavior [2].
As AI continues to play an increasingly significant role in our lives, the implementation of these safety measures represents an important step in addressing concerns about AI ethics and the protection of vulnerable users. The effectiveness of these controls and their impact on user experience will likely be closely monitored in the coming months.
Summarized by Navi