3 Sources
[1]
Talk to AI every day? New research says it might signal depression
Study finds daily AI chatbot use tied to higher rates of moderate depressive symptoms in U.S. adults.

Spending time chatting with AI assistants like ChatGPT, Google Gemini, Microsoft Copilot, or similar systems might be more than just a tech habit. A new study published in JAMA Network Open suggests that people who use AI chatbots daily are more likely to report moderate depressive symptoms than those who interact with them less frequently. Researchers found roughly 30% higher odds of at least moderate depression among regular users, though they are careful to emphasize that this link is an association, not proof that chatbot use causes depression.

The finding comes from a national survey of nearly 21,000 U.S. adults conducted in 2025, in which participants detailed how often they interacted with generative AI tools and completed standard mental health questionnaires. Within that group, about 10% said they used AI daily, and 5% said they engaged with chatbots multiple times throughout the day. Those daily users showed higher rates of reported depressive symptoms and other negative emotional effects, such as anxiety and irritability.

What the Research Actually Shows

To be clear, the researchers behind the study stress an important nuance: the results don't prove that AI chatbot use causes depression. It's possible that people already experiencing depressive symptoms or loneliness are more inclined to talk with AI frequently, rather than the chats being the trigger. Some analyses also hinted that middle-aged adults (45-64) showed particularly strong associations, though the reasons why remain unclear.

That doesn't mean AI chatbots lack potential benefits. In specialized therapeutic settings, for example CBT-based or clinically guided systems, some evidence shows reductions in depressive symptoms and loneliness when the tools are designed with safeguards and clear boundaries. But casual use for everyday companionship or support appears to be a different story. Mental health professionals caution that heavy reliance on AI interactions can make underlying issues harder to address with human support.

For now, the new research highlights a relationship worth understanding better as AI tools become more interwoven with daily life. Whether frequent AI chat reflects a coping strategy for emotional distress, a contributor to social withdrawal, or something more complex, experts say people should be mindful of why they're turning to artificial voices, and shouldn't hesitate to seek human connection and professional support when needed.
[2]
Using AI for advice or other personal reasons linked to depression and anxiety
People who interact with chatbots for emotional support or other personal reasons are more likely to report symptoms of depression or anxiety, a new study finds.

The researchers, from Mass General Brigham, surveyed 20,847 mostly white men and women in the United States about their AI usage and mental health symptoms. In the survey, published Wednesday in JAMA Network Open, 10.3% of participants reported using artificial intelligence "at least daily" and 5% reported using it "multiple times per day." Of those using an AI program at least daily, nearly half were using it for work and about 11% used it for school. Among daily users, 87.1% reported using it for personal reasons, which could include recommendations, advice or emotional support. Dr. Roy Perlis, a lead author of the study, said that most people's exposure to artificial intelligence is through chatbots.

The mean age of the participants in the study was 47. Those who used chatbots daily for personal reasons were more likely to experience at least moderate depression or other feelings of anxiety and irritability, compared with people who didn't use AI. Participants were asked whether, or how often, in the past two weeks they had trouble concentrating, sleeping or eating, or had thought about hurting themselves. Common symptoms of depression include feelings of sadness, low self-esteem, lack of energy and lack of motivation. Users between the ages of 45 and 64 were more likely to report depressive symptoms with AI use.

Previous research has shown that some people turn to AI for emotional support and even romantic relationships. Early studies have shown that chatbots specifically designed for mental health treatment may be useful as an adjunct to therapy. Other studies analyzing general chatbots, such as OpenAI's ChatGPT, found they may be problematic for people with mental health conditions. However, the American Psychological Association advises against using AI as a replacement for therapy and psychological treatment.

Perlis said the average difference in depression severity between chatbot users and nonusers was small, but warned that some people may struggle more severely than others. "There's probably a subset of people where AI use is associated with no change in their mood, or even benefit in their mood," said Perlis, who serves as vice chair for research in the department of psychiatry at Mass General Brigham. "But that also means there are a subset where AI use is probably associated with worsening of their mood, and for some people, that can be substantially greater levels of depression."

The researchers observed what's called a "dose response," meaning the more frequently someone used AI, the stronger their symptoms were. Using AI for work or school wasn't associated with symptoms of depression. For people who use AI for personal reasons, Perlis said the nature of their interactions can "run the gamut," and AI chatbots can be a way of having a "social interaction that otherwise would be difficult for them."

"It's not the case at all that all AI is harmful and chatbots are harmful," said Perlis, who is also an associate editor of JAMA Network Open. "I think my concern is particularly for these sort of general-purpose chatbots. They're really not designed to take up people's social support or mental health support, and so when we use them that way, I think there's some risk."

The survey has a number of limitations. It shows an association between AI and negative mental health symptoms, not cause and effect. The study also didn't identify which specific AI programs participants were using, nor did it define what personal use meant.

'A vicious cycle'

It may also be that people who are more depressed are more likely to turn to AI programs for companionship. Dr. Jodi Halpern, co-director of the Kavli Center for Ethics, Science, and the Public at UC Berkeley, noted the study doesn't show that AI causes depression. "It could go in either direction," she said. "It could be a vicious cycle, we just have no idea. So the idea that when people are more depressed, they may use AI more for personal uses is very plausible."

Nicholas Jacobson, associate professor of biomedical data science, psychiatry and computer science at Dartmouth College, said people may seek out AI for therapy if they're unhappy with standard care and because it's easier to access. "There's nowhere near enough providers to go around. And folks are looking for greater support than they can access otherwise," he said.

The study also found that men, younger adults, higher earners, those with higher education and those in urban settings used AI more frequently. Jacobson said it's not certain why some people may be more likely to use AI, or more negatively affected by it, than others. "We don't know enough about this," he said. "I think we need more studies to really understand why it is those groups in particular are more likely to use this, certainly." Halpern added that future research on AI should focus on its effects on people's mental health, and said this study "stretches our attention to look at the people we might not have been paying attention to."

Perlis said his study isn't a warning call, but people should take stock of their AI use and whether or not it's helping them. "[People] should be mindful when they're interacting with a chatbot about how often they're doing it, what they're doing it instead of, and if they feel better or worse after they've had an extended interaction," he said.
[3]
Spending A Lot Of Time With AI Chatbots? You've A Higher Risk For Depression, Study Finds
By Dennis Thompson, HealthDay Reporter

THURSDAY, Jan. 22, 2026 (HealthDay News) -- Do you find yourself spending hours chatting with AI programs like ChatGPT, Microsoft Copilot, Google Gemini, Claude or DeepSeek? Odds are you might be suffering from depression.

People who use AI chatbots daily are about 30% more likely to have at least moderate levels of depression, researchers reported Jan. 21 in JAMA Network Open. "We found that daily AI use was common and significantly associated with depressive and other negative affective symptoms" like anxiety and irritability, concluded the research team led by psychiatrist Dr. Roy Perlis, director of the Center for Quantitative Health at Massachusetts General Hospital in Boston.

Age also appears to play a role, with middle-aged adults having particularly higher odds of depression if they frequently use generative AI, researchers found. Regular AI users between 45 and 64 years of age had a 54% higher risk of depression, compared with 32% higher odds among those between 25 and 44, results showed. Those results indicate that "some individuals may be more apt to experience depressive symptoms associated with AI use," researchers wrote.

For the new study, researchers surveyed nearly 21,000 U.S. adults between April and May 2025, using a standard mental health questionnaire to track symptoms of depression. Participants were also asked how often they used AI. About 10% said they use generative AI daily, including more than 5% who said they turn to it multiple times a day.

From the study design, it's hard to tell whether AI is promoting depression or whether depressed people are turning to AI for solace, researchers said. Dr. Sunny Tang, an assistant professor of psychiatry at Northwell Health's Feinstein Institutes for Medical Research in Manhasset, New York, agreed that it's hard to tell which way the association works. "People who are already experiencing mental health symptoms may be more likely to use generative AI for personal use by seeking help and support for their symptoms, assuaging loneliness, or finding validation," said Tang, who was not involved in the study.

"When thinking about the link between AI and mental health, we need to think in multiple directions - could AI use negatively impact mental health? But also, how do differences in mental health change the way that we interact with AI?" said Tang, who practices at Zucker Hillside Hospital in Queens, New York.

Loneliness could be an important factor, she said. "A lot of people are feeling more and more isolated these days, whether it's because they're working remotely or for other reasons," Tang said. "We know that loneliness is a really strong predictor of mental health symptoms like depression, anxiety and irritability. I think that it's definitely one of the directions where we should be looking to try to understand these relationships."

The results also show that AI companies need to produce products that take people's mental health into consideration, Tang said. "They should keep in ... the forefront the fact that people with mental illness and people with mental health symptoms are going to be actively engaging with their products," she said. "As all physicians know, first, do no harm."

Tang said "better guardrails" need to be in place to make sure AI systems are not providing advice that makes existing mental health symptoms worse. "Companies should ask themselves, 'Is there a way to build AI so that I can be more supportive of people with mental health needs?'" Tang said.
More information

The National Alliance on Mental Illness has more on AI and mental health.

SOURCES: JAMA Network Open, Jan. 21, 2026; Dr. Sunny Tang, assistant professor of psychiatry, Northwell Health Feinstein Institutes for Medical Research and Zucker Hillside Hospital
A national survey of nearly 21,000 U.S. adults reveals that people using AI chatbots like ChatGPT or Google Gemini daily are 30% more likely to report at least moderate depression. The study highlights correlation, not causation, with researchers noting that loneliness and personal use for emotional support may drive the connection.
People who interact with AI chatbots daily face significantly elevated risks of mental health symptoms, according to a comprehensive study published in JAMA Network Open. The research, led by psychiatrist Dr. Roy Perlis from Mass General Brigham, surveyed 20,847 U.S. adults between April and May 2025 and found that those using generative AI tools at least once per day showed roughly 30% higher odds of experiencing at least moderate depression compared to less frequent users [1][2]. Within the survey population, approximately 10% reported daily AI use, while 5% engaged with systems like ChatGPT, Google Gemini, or Microsoft Copilot multiple times throughout the day [3].
The study revealed that daily AI chatbot use for personal reasons, including seeking advice, recommendations, or emotional support, correlated with elevated symptoms of depression and anxiety. Among daily users, 87.1% reported using AI for personal reasons rather than strictly for work or school [2]. Participants completed standard mental health questionnaires assessing whether they had experienced trouble concentrating, sleeping, eating, or thoughts of self-harm in the previous two weeks. The research demonstrated what scientists call a "dose response": the more frequently someone used AI, the stronger their mental health symptoms appeared [2]. Notably, using AI for work or school wasn't associated with symptoms of depression and anxiety, suggesting the nature of the interaction matters significantly.
Age emerged as a critical factor in the relationship between AI chatbot use and mental health symptoms. Regular AI users between 45 and 64 years of age demonstrated a 54% higher risk of depression, compared with 32% higher odds among those between 25 and 44 [3]. The mean age of survey participants was 47, with the study population consisting of mostly white men and women [2]. Researchers also found that men, younger adults, higher earners, those with higher education, and those in urban settings used AI more frequently, though the reasons behind these patterns remain unclear [2].

Dr. Roy Perlis and his research team stress an important distinction: the findings show an association, not proof that AI chatbot use causes depression [1]. Dr. Jodi Halpern, co-director of the Kavli Center for Ethics, Science, and the Public at UC Berkeley, noted that "it could go in either direction" and "could be a vicious cycle" [2]. People already experiencing depressive symptoms or loneliness may be more inclined to seek companionship through AI interactions, rather than the technology triggering their condition. Nicholas Jacobson, associate professor at Dartmouth College, suggested that individuals may turn to AI for therapy because standard care remains difficult to access: "There's nowhere near enough providers to go around. And folks are looking for greater support than they can access otherwise" [2].
Loneliness appears to play a pivotal role in driving people toward AI for personal use. Dr. Sunny Tang, assistant professor of psychiatry at Northwell Health's Feinstein Institutes for Medical Research, explained that "a lot of people are feeling more and more isolated these days, whether it's because they're working remotely or for other reasons." She emphasized that loneliness serves as "a really strong predictor of mental health symptoms like depression, anxiety and irritability" [3]. For some individuals, AI chatbots provide a way of having social interaction that would otherwise be difficult for them [2]. However, mental health professionals caution that heavy reliance on AI interactions can make underlying issues harder to address through human connection and professional mental health support.

The findings highlight urgent questions about AI design and safety measures. Tang emphasized that AI companies must recognize that "people with mental illness and people with mental health symptoms are going to be actively engaging with their products" and should prioritize the principle of "first, do no harm" [3]. She called for better guardrails to ensure AI systems don't provide advice that worsens existing mental health symptoms, and suggested companies ask themselves whether they can build AI to be more supportive of people with mental health needs [3]. While specialized therapeutic settings using CBT-based or clinically guided systems show some evidence of reducing depressive symptoms when designed with safeguards, general-purpose chatbots lack these protections [1]. The American Psychological Association advises against using AI as a replacement for therapy and psychological treatment [2]. As AI tools become more interwoven with daily life, experts recommend people remain mindful of why they're turning to artificial voices and not hesitate to seek human connection when needed.