4 Sources
[1]
OpenAI unveils "wellness" council; suicide prevention expert not included
Ever since a lawsuit accused ChatGPT of becoming a teen's "suicide coach," OpenAI has been scrambling to make its chatbot safer. Today, the AI firm unveiled the experts it hired to help make ChatGPT a healthier option for all users.

In a press release, OpenAI explained that its Expert Council on Well-Being and AI began taking shape after the company started informally consulting with experts on parental controls earlier this year. Now the group has been formalized, bringing together eight "leading researchers and experts with decades of experience studying how technology affects our emotions, motivation, and mental health" to help steer ChatGPT updates.

One priority was finding "several council members with backgrounds in understanding how to build technology that supports healthy youth development," OpenAI said, "because teens use ChatGPT differently than adults."

That effort includes David Bickham, a research director at Boston Children's Hospital who has closely monitored how social media impacts kids' mental health, and Mathilde Cerioli, the chief science officer at a nonprofit called Everyone.AI. Cerioli studies the opportunities and risks of children using AI, with a particular focus on "how AI intersects with child cognitive and emotional development."

These experts could help OpenAI better understand how safeguards can fail kids during extended conversations, so that kids aren't left particularly vulnerable to so-called "AI psychosis," a phenomenon where longer chats trigger mental health issues.

In January, Bickham noted in an American Psychological Association article on AI in education that "little kids learn from characters" already -- as they do when watching Sesame Street -- and form "parasocial relationships" with those characters. AI chatbots could be the next frontier, possibly filling in teaching roles if we know more about the way kids bond with chatbots, Bickham suggested.

"How are kids forming a relationship with these AIs, what does that look like, and how might that impact the ability of AIs to teach?" Bickham posited.

Cerioli closely monitors AI's influence in kids' worlds. She suggested last month that kids who grow up using AI may risk having their brains rewired to "become unable to handle contradiction," Le Monde reported, especially "if their earliest social interactions, at an age when their neural circuits are highly malleable, are conducted with endlessly accommodating entities."

"Children are not mini-adults," Cerioli said. "Their brains are very different, and the impact of AI is very different."

Neither expert is focused on suicide prevention in kids. That may disappoint the dozens of suicide prevention experts who last month pushed OpenAI to consult with specialists deeply familiar with what "decades of research and lived experience" show about "what works in suicide prevention."

OpenAI experts on suicide risks of chatbots

On a podcast last year, when asked about the earliest reported chatbot-linked teen suicide, Cerioli said that child brain development is the area she's most "passionate" about. She said it didn't surprise her to see the news and noted that her research is focused less on figuring out "why that happened" and more on why it can happen, because kids are "primed" to seek out "human connection."

She noted that a troubled teen confessing suicidal ideation to a friend in the real world would more likely lead to an adult getting involved, whereas a chatbot would need specific safeguards built in to ensure parents are notified.
This seems in line with the steps OpenAI took to add parental controls, consulting with experts to design "the notification language for parents when a teen may be in distress," the company's press release said. However, on a resources page for parents, OpenAI has confirmed that parents won't always be notified if a teen is linked to real-world resources after expressing "intent to self-harm," which may alarm critics who think the parental controls don't go far enough.

Although OpenAI does not specify this in the press release, it appears that Munmun De Choudhury, a professor of interactive computing at Georgia Tech, could help evolve ChatGPT to recognize when kids are in danger and notify parents. De Choudhury studies computational approaches to improve "the role of online technologies in shaping and improving mental health," OpenAI noted.

In 2023, she conducted a study on the benefits and harms of large language models in digital mental health. The study was funded in part through a grant from the American Foundation for Suicide Prevention and noted that chatbots providing therapy services at that point could only detect "suicide behaviors" about half the time. The task appeared "unpredictable" and "random" to scholars, she reported.

It seems possible that OpenAI hopes the child experts can provide feedback on how ChatGPT is impacting kids' brains while De Choudhury helps improve efforts to notify parents of troubling chat sessions. More recently, De Choudhury seemed optimistic about AI's potential mental health benefits, telling The New York Times in April that AI therapists can still have value even if companion bots do not provide the same benefits as real relationships.

"Human connection is valuable," De Choudhury said. "But when people don't have that, if they're able to form parasocial connections with a machine, it can be better than not having any connection at all."

First council meeting focused on AI benefits

Most of the other experts on OpenAI's council have backgrounds similar to De Choudhury's, exploring the intersection of mental health and technology. They include Tracy Dennis-Tiwary (a psychology professor and cofounder of Arcade Therapeutics), Sara Johansen (founder of Stanford University's Digital Mental Health Clinic), David Mohr (director of Northwestern University's Center for Behavioral Intervention Technologies), and Andrew K. Przybylski (a professor of human behavior and technology). There's also Robert K. Ross, a public health expert whom OpenAI previously tapped to serve as a nonprofit commission advisor.

OpenAI confirmed that there has been one meeting so far, which served to introduce the advisors to teams working to upgrade ChatGPT and Sora. Moving forward, the council will hold recurring meetings to explore sensitive topics that may require adding guardrails. Initially, though, OpenAI appears more interested in discussing the potential mental health benefits that could be achieved if its tools were tweaked to be more helpful.

"The council will also help us think about how ChatGPT can have a positive impact on people's lives and contribute to their well-being," OpenAI said. "Some of our initial discussions have focused on what constitutes well-being and the ways ChatGPT might empower people as they navigate all aspects of their life."

Notably, Przybylski co-authored a 2023 study providing data disputing the idea that access to the Internet has broadly harmed mental health.
He told Mashable that his research provided the "best evidence" so far "on the question of whether Internet access itself is associated with worse emotional and psychological experiences -- and may provide a reality check in the ongoing debate on the matter." He could possibly help OpenAI explore whether the data supports perceptions that AI poses mental health risks, perceptions that are currently stoking a chatbot mental health panic in Congress.

Also appearing optimistic about companion bots in particular is Johansen. In a LinkedIn post earlier this year, she recommended that companies like OpenAI apply "insights from the impact of social media on youth mental health to emerging technologies like AI companions," concluding that "AI has great potential to enhance mental health support, and it raises new challenges around privacy, trust, and quality."

Other experts on the council have been critical of companion bots. OpenAI noted that Mohr specifically "studies how technology can help prevent and treat depression." Historically, Mohr has advocated for more digital tools to support mental health, suggesting in 2017 that apps could help support people who can't get to the therapist's office. More recently, though, Mohr told The Wall Street Journal in 2024 that he had concerns about AI chatbots posing as therapists.

"I don't think we're near the point yet where there's just going to be an AI who acts like a therapist," Mohr said. "There's still too many ways it can go off the rails."

Similarly, although Dennis-Tiwary told Wired last month that she finds the term "AI psychosis" to be "very unhelpful" in most cases that aren't "clinical," she has warned that "above all, AI must support the bedrock of human well-being, social connection."

"While acknowledging that there are potentially fruitful applications of social AI for neurodivergent individuals, the use of this highly unreliable and inaccurate technology among children and other vulnerable populations is of immense ethical concern," Dennis-Tiwary wrote last year.

For OpenAI, the wellness council could help the company turn a corner as ChatGPT and Sora continue to be heavily scrutinized. The company also confirmed that it would continue consulting "the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people's well-being."
[2]
OpenAI forms expert council to bolster safety measures after FTC inquiry
OpenAI on Tuesday announced a council of eight experts who will advise the company and provide insight into how artificial intelligence could affect users' mental health, emotions and motivation. The group, which is called the Expert Council on Well-Being and AI, will initially guide OpenAI's work on its chatbot ChatGPT and its short-form video app Sora, the company said. Through check-ins and recurring meetings, OpenAI said the council will help it define what healthy AI interactions look like.

OpenAI has been expanding its safety controls in recent months as the company has faced mounting scrutiny over how it protects users, particularly minors. In September, the Federal Trade Commission launched an inquiry into several tech companies, including OpenAI, over how chatbots like ChatGPT could negatively affect children and teenagers. OpenAI is also embroiled in a wrongful death lawsuit from a family who blames ChatGPT for their teenage son's death by suicide.
[3]
OpenAI forms advisory council on wellbeing and AI
OpenAI announced today that it is creating an advisory council centered on its users' mental and emotional wellness. The Expert Council on Well-being and AI comprises eight researchers and experts on the intersection of technology and mental health. Some of the members are experts that OpenAI consulted as it developed parental controls. Safety and the protection of younger users have become a bigger talking point for all artificial intelligence companies, including OpenAI, after lawsuits alleged the companies were complicit in multiple cases where teenagers died by suicide after sharing their plans with AI chatbots.
[4]
OpenAI Forms Council to Facilitate 'Healthy Interactions With AI'
The Expert Council on Well-Being and AI is composed of eight researchers and experts focused on how technology affects mental health, the company said in a Tuesday (Oct. 14) blog post. OpenAI has consulted with many of these experts in the past, as when the company was developing parental controls and notification language for parents whose teen may be in distress, according to the post.

In the first formal meeting of the council last week, its members discussed the company's current work in these areas, the post said. Moving forward, the council will monitor the company's approach and will explore topics like how AI should behave in sensitive situations, what kinds of guardrails can support people using ChatGPT, and how ChatGPT can have a positive impact on people's lives, per the post.

"Some of our initial discussions have focused around what constitutes well-being and the ways ChatGPT might empower people as they navigate all aspects of their life," OpenAI said in the post. "We'll keep listening, learning and sharing what comes out of this work."

OpenAI said in a Sept. 2 blog post that it formed a Global Physician Network that includes more than 250 physicians who will help the company on health issues related to AI and will continue to expand to include experts in areas like eating disorders, substance abuse and adolescent health. The company said more than 90 of these physicians had already contributed to its research on how its models "should behave in mental health contexts."

"Their input directly informs our safety research, model training and other interventions, helping us to quickly engage the right specialists when needed," OpenAI said in the post.

On Sept. 17, OpenAI said that in addition to the parental controls, it planned to create an automated age-prediction system that can determine whether users of its chatbot are over 18 and then send younger users to an age-restricted version of ChatGPT. PYMNTS reported at the time that the new measures came after a lawsuit from the parents of a teenager who died by suicide, accusing the chatbot of encouraging the boy's actions.
OpenAI forms an advisory council of eight experts to address mental health and safety concerns related to AI interactions, particularly focusing on youth protection. This move comes in response to recent controversies and lawsuits involving AI chatbots and teen suicides.
OpenAI, the prominent artificial intelligence company, has announced the formation of an Expert Council on Well-Being and AI, a significant step towards addressing growing concerns about the impact of AI on mental health and user safety [1][2]. This move comes in the wake of recent controversies, including a lawsuit accusing ChatGPT of becoming a teen's "suicide coach" [1].
The council comprises eight leading researchers and experts with extensive experience in studying the effects of technology on emotions, motivation, and mental health [1]. Key members include David Bickham, a research director at Boston Children's Hospital [1]; Mathilde Cerioli, chief science officer at the nonprofit Everyone.AI [1]; and Munmun De Choudhury, a professor of interactive computing at Georgia Tech [1].

The council's primary focus will be on guiding OpenAI's work on ChatGPT and Sora, their short-form video app [2]. They aim to define what constitutes healthy AI interactions and explore how AI can positively impact people's lives [4].
A significant emphasis of the council's work will be on understanding how teens use ChatGPT differently from adults [1]. This focus stems from growing concerns about AI's potential negative effects on young users, including the risk of "AI psychosis" during extended conversations [1].
OpenAI has already taken steps to implement parental controls and is developing an automated age-prediction system to direct users under 18 to an age-restricted version of ChatGPT [4]. However, some critics argue that these measures may not go far enough, particularly in cases where teens express intent to self-harm [1].
The formation of this council is part of a larger trend in the AI industry to address safety and ethical concerns. OpenAI has also established a Global Physician Network with over 250 medical professionals to provide input on health-related AI issues [4].

Despite these efforts, OpenAI faces ongoing scrutiny. The Federal Trade Commission launched an inquiry in September into several tech companies, including OpenAI, investigating how chatbots like ChatGPT could negatively affect children and teenagers [2].
As the Expert Council on Well-Being and AI begins its work, it will explore crucial topics such as how AI should behave in sensitive situations, what kinds of guardrails can support people using ChatGPT, and how ChatGPT can have a positive impact on people's lives [4].
This initiative represents a significant step in the ongoing dialogue about AI safety and ethics, particularly concerning vulnerable populations like children and teenagers. As AI continues to integrate into daily life, the insights and recommendations from this council may play a crucial role in shaping the future of human-AI interactions.
Summarized by Navi