3 Sources
[1]
Attachment theory: A new lens for understanding human-AI relationships
Human-AI interactions are well understood in terms of trust and companionship, but the role of attachment-related functions and experiences in these relationships is less clear. In a new study, researchers from Waseda University have devised a self-report scale that captures attachment anxiety and avoidance toward AI. Their work is expected to serve as a guideline for further exploring human-AI relationships and for incorporating ethical considerations into AI design.

Artificial intelligence (AI) is now ubiquitous, human-AI interactions are becoming more frequent and complex, and this trend is expected to accelerate. Scientists have accordingly made considerable efforts to understand human-AI relationships in terms of trust and companionship. However, these human-machine interactions may also be understood in terms of attachment-related functions and experiences, concepts traditionally used to explain interpersonal bonds between humans.

In a project comprising two pilot studies and one formal study, a group of researchers from Waseda University, Japan, including Research Associate Fan Yang and Professor Atsushi Oshio of the Faculty of Letters, Arts and Sciences, applied attachment theory to human-AI relationships. Their findings were published online in the journal Current Psychology on May 9, 2025.

Mr. Yang explains the motivation behind the research: "As researchers in attachment and social psychology, we have long been interested in how people form emotional bonds. In recent years, generative AI such as ChatGPT has become increasingly stronger and wiser, offering not only informational support but also a sense of security. These characteristics resemble what attachment theory describes as the basis for forming secure relationships. As people begin to interact with AI not just for problem-solving or learning, but also for emotional support and companionship, their emotional connection or security experience with AI demands attention. This research is our attempt to explore that possibility."

Notably, the team developed a new self-report scale, the Experiences in Human-AI Relationships Scale (EHARS), to measure attachment-related tendencies toward AI. They found that some individuals seek emotional support and guidance from AI, much as they do from other people: nearly 75% of participants turned to AI for advice, and about 39% perceived AI as a constant, dependable presence.

The study differentiated two dimensions of human attachment to AI: anxiety and avoidance. An individual with high attachment anxiety toward AI needs emotional reassurance and fears receiving inadequate responses from AI. In contrast, high attachment avoidance toward AI is characterized by discomfort with closeness and a consequent preference for emotional distance from AI.

These findings do not mean that humans are currently forming genuine emotional attachments to AI. Rather, the study demonstrates that psychological frameworks developed for human relationships may also apply to human-AI interactions.

The results can inform the ethical design of AI companions and mental health support tools. For instance, AI chatbots used in loneliness interventions or therapy apps could be tailored to different users' emotional needs, providing more empathetic responses for users with high attachment anxiety or maintaining respectful distance for users with avoidant tendencies. The results also suggest a need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation. Furthermore, the proposed EHARS could be used by developers or psychologists to assess how people relate to AI emotionally and to adjust AI interaction strategies accordingly.

"As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems. Our research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI. Lastly, it promotes a better understanding of how humans connect with technology on a societal level, helping to guide policy and design practices that prioritize psychological well-being," concludes Mr. Yang.
[2]
Human-AI relationships: New scale measures our attachment patterns
[3]
How Humans Emotionally Bond With AI - Neuroscience News
Summary: As AI becomes more integrated into daily life, researchers are investigating whether emotional attachment to AI mirrors human interpersonal relationships. A new study from Japan introduces a scale to assess how people form attachment-like bonds with AI, finding that some users seek emotional reassurance while others prefer distance. The research identifies two key dimensions, attachment anxiety and avoidance, each shaping how individuals perceive AI support. These findings offer important insights for the ethical design of AI systems used in companionship, therapy, or customer service.
Journal abstract, "Using attachment theory to conceptualize and measure the experiences in human-AI relationships": Artificial intelligence (AI) is growing "stronger and wiser," leading to increasingly frequent and varied human-AI interactions; this trend is expected to continue. Existing research has primarily focused on trust and companionship in human-AI relationships, but little is known about whether attachment-related functions and experiences also apply. Across two pilot studies and one formal study, the current project explored using attachment theory to examine human-AI relationships. We first hypothesized that interactions with generative AI mimic attachment-related functions, which we tested in Pilot Study 1. We then posited that experiences in human-AI relationships can be conceptualized along two attachment dimensions, anxiety and avoidance, analogous to traditional interpersonal dynamics; to this end, a self-report scale, the Experiences in Human-AI Relationships Scale, was developed in Pilot Study 2, and its reliability and validity were tested in the formal study. Overall, the findings suggest that attachment theory contributes substantially to understanding the dynamics of human-AI interactions. Specifically, attachment anxiety toward AI is characterized by a significant need for emotional reassurance from AI and a fear of receiving inadequate responses, whereas attachment avoidance involves discomfort with closeness and a preference for maintaining emotional distance from AI. These patterns show similarities with human-human and human-pet relationships, suggesting shared structural foundations underlying the experiences these interactions generate. Future research should examine how these attachment styles function across different relational contexts.
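The abstract notes that the scale's reliability was tested in the formal study. As background for readers unfamiliar with how self-report scales are evaluated, here is a minimal sketch of Cronbach's alpha, the most common internal-consistency index for subscales of this kind. The abstract does not name the statistic actually used, and the ratings below are invented, so treat this purely as illustration.

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Internal-consistency reliability of one subscale.

    `items` holds one list per questionnaire item, each containing that
    item's ratings across all respondents. Standard formula:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
    """
    k = len(items)
    totals = [sum(ratings) for ratings in zip(*items)]  # per-respondent sums
    item_variance = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_variance / pvariance(totals))

# Invented data: three hypothetical anxiety items rated by five respondents (1-7).
anxiety_items = [
    [6, 5, 7, 4, 6],
    [5, 5, 6, 3, 6],
    [6, 4, 7, 4, 5],
]
print(round(cronbach_alpha(anxiety_items), 2))  # 0.92 on this toy data
```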
Researchers from Waseda University have developed a novel scale to measure human attachment to AI, uncovering patterns of anxiety and avoidance in these relationships. This breakthrough study applies attachment theory to human-AI interactions, offering insights for ethical AI design and mental health applications.
Researchers from Waseda University in Japan have made a significant breakthrough in understanding human-AI relationships by applying attachment theory, traditionally used to explain human interpersonal bonds. The study, published in the journal Current Psychology on May 9, 2025, introduces a novel self-report scale that measures attachment-related tendencies toward AI [1].
The research team, led by Research Associate Fan Yang and Professor Atsushi Oshio, developed the Experiences in Human-AI Relationships Scale (EHARS) to assess how people form emotional connections with AI. The scale revealed that some individuals seek emotional support and guidance from AI, mirroring human interactions [2].
Key findings from the study include [3]:
- Nearly 75% of participants turned to AI for advice.
- About 39% perceived AI as a constant, dependable presence.
- Some individuals seek emotional support and guidance from AI in much the same way they do from other people.
The study differentiated between two dimensions of attachment in human-AI interactions [3]:
- Attachment anxiety: a strong need for emotional reassurance from AI and a fear of receiving inadequate responses.
- Attachment avoidance: discomfort with closeness and a preference for maintaining emotional distance from AI.
These findings suggest that psychological frameworks used for human relationships may also apply to human-AI interactions, although the researchers caution that this does not necessarily indicate genuine emotional attachments to AI at present.
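To make the two-dimensional structure concrete, here is a minimal sketch of how subscale scores on an instrument like EHARS might be computed. The item identifiers, the item-to-dimension grouping, and the 1-7 response format are illustrative assumptions; the published scale's actual items and scoring procedure are given in the paper.

```python
from statistics import mean

# Hypothetical item-to-dimension mapping for a two-dimension attachment
# scale; the real EHARS item wordings and groupings are in the paper.
ANXIETY_ITEMS = ["a1", "a2", "a3"]    # e.g., "I worry the AI's replies won't meet my needs."
AVOIDANCE_ITEMS = ["v1", "v2", "v3"]  # e.g., "I prefer not to depend on an AI emotionally."

def score_subscales(responses: dict[str, int]) -> dict[str, float]:
    """Average the 1-7 Likert ratings within each dimension.

    Averaging items per subscale is the conventional way to score
    self-report attachment measures; whether EHARS is scored exactly
    this way is an assumption here.
    """
    return {
        "anxiety": mean(responses[i] for i in ANXIETY_ITEMS),
        "avoidance": mean(responses[i] for i in AVOIDANCE_ITEMS),
    }

# Example: a respondent high in attachment anxiety, low in avoidance.
profile = score_subscales({"a1": 6, "a2": 7, "a3": 6, "v1": 2, "v2": 1, "v3": 2})
print(profile)  # {'anxiety': 6.33..., 'avoidance': 1.67...}
```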
The research has significant implications for the ethical design of AI companions and mental health support tools. Mr. Yang explains, "As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems" [1].
Potential applications include:
- Tailoring AI chatbots in loneliness interventions or therapy apps to users' emotional needs: more empathetic responses for users with high attachment anxiety, and a respectful distance for users with avoidant tendencies.
- Requiring transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation.
The EHARS could be used by developers and psychologists to assess how people relate to AI emotionally and adjust interaction strategies accordingly. This research promotes a better understanding of human-technology connections on a societal level, potentially guiding policy and design practices that prioritize psychological well-being [2].
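As an illustration of that last point, the sketch below shows one way a system might branch its conversational style on the two subscale scores. The midpoint threshold and the style labels are invented for the example; the study contributes the measurement, not this decision rule.

```python
def choose_response_style(anxiety: float, avoidance: float,
                          midpoint: float = 4.0) -> str:
    """Pick a conversational style from two attachment scores (1-7 scale).

    `midpoint` is an assumed cut score; a deployed system would need
    empirically validated thresholds, not this placeholder.
    """
    if anxiety >= midpoint and avoidance < midpoint:
        # Reassurance-seeking users: acknowledge feelings, respond warmly.
        return "warm, explicitly reassuring"
    if avoidance >= midpoint and anxiety < midpoint:
        # Distance-preferring users: stay factual, avoid emotional probing.
        return "concise, task-focused"
    if anxiety >= midpoint and avoidance >= midpoint:
        # Mixed profile: supportive but not intrusive.
        return "supportive, low-pressure"
    return "neutral default"

print(choose_response_style(anxiety=6.3, avoidance=1.7))  # warm, explicitly reassuring
```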
As AI continues to evolve and integrate into daily life, this groundbreaking study provides valuable insights into the complex emotional dynamics of human-AI relationships, paving the way for more ethical and psychologically informed AI development.