2 Sources
[1]
Research reveals bias toward additive advice in mental health support
University of Bath, Aug 21, 2025

From "try yoga" to "start journaling," most mental health advice piles on extra tasks. Rarely does it tell you to stop doing something harmful. New research from the University of Bath and the University of Hong Kong shows that this "additive advice bias" appears everywhere: in conversations between people, in posts on social media, and even in recommendations from AI chatbots. The result? Well-intentioned tips that may leave people feeling more overwhelmed than helped.

With mental health problems rising worldwide and services under strain, friends, family, online communities and AI are often the first port of call. Understanding how we advise each other could be key to making that support more effective.

A collection of eight studies involving hundreds of participants, published in Communications Psychology, analysed experimental data and real-world Reddit advice, and tested ChatGPT's responses. Participants advised strangers, friends, and themselves on scenarios involving both harmful habits, such as gambling, and missed beneficial activities, such as exercise.

Key findings:
* Additive advice dominates: Across every context, people suggested adding activities far more than removing harmful ones.
* Feasibility and benefit: Doing more was seen as easier and more beneficial than cutting harmful things out.
* Advice varies by relationship: Cutting harmful things out is viewed as easier for our close friends than for ourselves.
* AI mirrors human bias: ChatGPT gave predominantly additive advice, reflecting patterns in online social media.

Senior author Dr. Tom Barry, from the Department of Psychology at the University of Bath, England, said: "In theory, good advice should balance doing more with doing less. But we found a consistent tilt towards piling more onto people's plates, and even AI has learned to do it. While well-meaning, it can unintentionally make mental health feel like an endless list of chores."

Co-author Dr. Nadia Adelina, from the Department of Psychology at the University of Hong Kong, said: "As AI chatbots become a major source of mental health guidance, they risk amplifying this bias. Building in prompts to explore what people might remove from their lives could make advice more balanced and less overwhelming."

This research was supported by the Research Promotion Fund of the Department of Psychology, University of Bath, England.

Journal reference: Barry, T. J., & Adelina, N. (2025). People overlook subtractive solutions to mental health problems. Communications Psychology. doi.org/10.1038/s44271-025-00312-8.
[2]
Why mental health advice often adds to your to-do list
New research from the University of Bath and University of Hong Kong uncovers a tendency to offer additive rather than subtractive advice for mental health issues, a bias that extends to AI chatbots like ChatGPT.
A groundbreaking study conducted by researchers from the University of Bath and the University of Hong Kong has uncovered a pervasive bias in mental health advice. The research, published in Communications Psychology, reveals that most mental health recommendations tend to add tasks to people's lives rather than suggest the removal of harmful activities [1][2].
The study, comprising eight separate investigations with hundreds of participants, found that this "additive advice bias" is prevalent across various contexts. Whether it's advice shared between individuals, posts on social media platforms, or even recommendations from AI chatbots, the tendency is to suggest additional activities rather than to advocate for the cessation of detrimental habits [1].
Key findings from the research include:
* Additive advice dominates: Across every context, people suggested adding activities far more than removing harmful ones.
* Feasibility and benefit: Doing more was seen as easier and more beneficial than cutting harmful things out.
* Advice varies by relationship: Cutting harmful things out is viewed as easier for our close friends than for ourselves.
* AI mirrors human bias: ChatGPT gave predominantly additive advice, reflecting patterns in online social media.
Dr. Tom Barry, the senior author from the University of Bath, emphasized the potential drawbacks of this bias. He stated: "In theory, good advice should balance doing more with doing less. But we found a consistent tilt towards piling more onto people's plates, and even AI has learned to do it. While well-meaning, it can unintentionally make mental health feel like an endless list of chores" [1].
As AI chatbots increasingly become a primary source of mental health guidance, there's a risk of amplifying this bias. Dr. Nadia Adelina, co-author from the University of Hong Kong, suggested a potential solution: "Building in prompts to explore what people might remove from their lives could make advice more balanced and less overwhelming" [2].
The research took a diverse approach, analyzing experimental data, real-world advice from Reddit, and responses from ChatGPT. Participants were asked to provide advice in various scenarios, covering both harmful habits like gambling and missed beneficial activities such as exercise [1][2].
This study opens up new avenues for improving mental health support. By recognizing and addressing the additive advice bias, both human advisors and AI systems could potentially offer more balanced and effective guidance. As mental health issues continue to rise globally and services face increasing strain, understanding these biases becomes crucial to enhancing the quality of support available through various channels [1][2].
Summarized by Navi