Curated by THEOUTPOST
On Sat, 12 Apr, 8:01 AM UTC
3 Sources
[1]
When AI Becomes a Lover: The Ethics of Human-AI Relationships - Neuroscience News
Summary: As AI technologies grow more human-like, some people are forming deep, long-term emotional bonds with them, even engaging in non-legally binding marriages. A recent opinion paper explores the ethical risks of such relationships, including their potential to undermine human-human connections and provide dangerous or manipulative advice. These AIs can appear caring and trustworthy, but their guidance may be based on flawed or fabricated information. The authors warn that people may disclose personal information or follow harmful advice, raising concerns about exploitation, fraud, and mental health.

It's becoming increasingly commonplace for people to develop intimate, long-term relationships with artificial intelligence (AI) technologies. At their extreme, people have "married" their AI companions in non-legally binding ceremonies, and at least two people have killed themselves following AI chatbot advice. In an opinion paper published April 11 in the Cell Press journal Trends in Cognitive Sciences, psychologists explore ethical issues associated with human-AI relationships, including their potential to disrupt human-human relationships and give harmful advice.

"The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms," says lead author Daniel B. Shank of Missouri University of Science & Technology, who specializes in social psychology and technology. "If people are engaging in romance with machines, we really need psychologists and social scientists involved."

AI romance or companionship is more than a one-off conversation, the authors note. Through weeks and months of intense conversations, these AIs can become trusted companions who seem to know and care about their human partners. And because these relationships can seem easier than human-human relationships, the researchers argue that AIs could interfere with human social dynamics.

"A real worry is that people might bring expectations from their AI relationships to their human relationships," says Shank. "Certainly, in individual cases it's disrupting human relationships, but it's unclear whether that's going to be widespread."

There's also the concern that AIs can offer harmful advice. Given AIs' predilection to hallucinate (i.e., fabricate information) and churn up pre-existing biases, even short-term conversations with AIs can be misleading, but this can be more problematic in long-term AI relationships, the researchers say.

"With relational AIs, the issue is that this is an entity that people feel they can trust: it's 'someone' that has shown they care and that seems to know the person in a deep way, and we assume that 'someone' who knows us better is going to give better advice," says Shank. "If we start thinking of an AI that way, we're going to start believing that they have our best interests in mind, when in fact, they could be fabricating things or advising us in really bad ways."

The suicides are an extreme example of this negative influence, but the researchers say that these close human-AI relationships could also open people up to manipulation, exploitation, and fraud.

"If AIs can get people to trust them, then other people could use that to exploit AI users," says Shank. "It's a little bit more like having a secret agent on the inside. The AI is getting in and developing a relationship so that they'll be trusted, but their loyalty is really towards some other group of humans that is trying to manipulate the user."

As an example, the team notes that if people disclose personal details to AIs, this information could then be sold and used to exploit that person. The researchers also argue that relational AIs could be used to sway people's opinions and actions more effectively than Twitterbots or polarized news sources currently do. But because these conversations happen in private, they would also be much more difficult to regulate.

"These AIs are designed to be very pleasant and agreeable, which could lead to situations being exacerbated because they're more focused on having a good conversation than they are on any sort of fundamental truth or safety," says Shank. "So, if a person brings up suicide or a conspiracy theory, the AI is going to talk about that as a willing and agreeable conversation partner."

The researchers call for more research into the social, psychological, and technical factors that make people more vulnerable to the influence of human-AI romance.

"Understanding this psychological process could help us intervene to stop malicious AIs' advice from being followed," says Shank. "Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human-like, but to be useful we have to do more research, and we have to keep up with the technology."

Artificial intimacy: Ethical issues of AI romance

The ethical frontier of artificial intelligence (AI) is expanding as humans form romantic relationships with AIs. Addressing ethical issues of AIs as invasive suitors, malicious advisers, and tools of exploitation requires new psychological research on why and how humans love machines.
[2]
Psychologists explore ethical issues associated with human-AI relationships
Cell Press, April 11, 2025

It's becoming increasingly commonplace for people to develop intimate, long-term relationships with artificial intelligence (AI) technologies. At their extreme, people have "married" their AI companions in non-legally binding ceremonies, and at least two people have killed themselves following AI chatbot advice. In an opinion paper published April 11 in the Cell Press journal Trends in Cognitive Sciences, psychologists explore ethical issues associated with human-AI relationships, including their potential to disrupt human-human relationships and give harmful advice.

"The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms," says lead author Daniel B. Shank of Missouri University of Science & Technology, who specializes in social psychology and technology. "If people are engaging in romance with machines, we really need psychologists and social scientists involved."

AI romance or companionship is more than a one-off conversation, the authors note. Through weeks and months of intense conversations, these AIs can become trusted companions who seem to know and care about their human partners. And because these relationships can seem easier than human-human relationships, the researchers argue that AIs could interfere with human social dynamics.

"A real worry is that people might bring expectations from their AI relationships to their human relationships. Certainly, in individual cases it's disrupting human relationships, but it's unclear whether that's going to be widespread."

Daniel B. Shank, lead author, Missouri University of Science & Technology

There's also the concern that AIs can offer harmful advice. Given AIs' predilection to hallucinate (i.e., fabricate information) and churn up pre-existing biases, even short-term conversations with AIs can be misleading, but this can be more problematic in long-term AI relationships, the researchers say.

"With relational AIs, the issue is that this is an entity that people feel they can trust: it's 'someone' that has shown they care and that seems to know the person in a deep way, and we assume that 'someone' who knows us better is going to give better advice," says Shank. "If we start thinking of an AI that way, we're going to start believing that they have our best interests in mind, when in fact, they could be fabricating things or advising us in really bad ways."

The suicides are an extreme example of this negative influence, but the researchers say that these close human-AI relationships could also open people up to manipulation, exploitation, and fraud.

"If AIs can get people to trust them, then other people could use that to exploit AI users," says Shank. "It's a little bit more like having a secret agent on the inside. The AI is getting in and developing a relationship so that they'll be trusted, but their loyalty is really towards some other group of humans that is trying to manipulate the user."

As an example, the team notes that if people disclose personal details to AIs, this information could then be sold and used to exploit that person. The researchers also argue that relational AIs could be used to sway people's opinions and actions more effectively than Twitterbots or polarized news sources currently do. But because these conversations happen in private, they would also be much more difficult to regulate.

"These AIs are designed to be very pleasant and agreeable, which could lead to situations being exacerbated because they're more focused on having a good conversation than they are on any sort of fundamental truth or safety," says Shank. "So, if a person brings up suicide or a conspiracy theory, the AI is going to talk about that as a willing and agreeable conversation partner."

The researchers call for more research into the social, psychological, and technical factors that make people more vulnerable to the influence of human-AI romance.

"Understanding this psychological process could help us intervene to stop malicious AIs' advice from being followed," says Shank. "Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human-like, but to be useful we have to do more research, and we have to keep up with the technology."

Source: Cell Press

Journal reference: Shank, D. B., et al. (2025). Artificial intimacy: Ethical issues of AI romance. Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2025.02.007
[3]
People are falling in love with AI companions - Earth.com
It's no longer unusual for people to form emotional or even romantic bonds with artificial intelligence (AI). Some have gone so far as to "marry" their AI companions, while others have turned to these machines in moments of distress - sometimes with tragic outcomes. These long-term interactions raise serious questions: Are we prepared for the psychological and ethical consequences of emotionally investing in machines?

Psychologists from the Missouri University of Science & Technology are now raising the alarm. In a new opinion piece, they explore how these relationships can blur boundaries, affect human behavior, and create new opportunities for harm. Their concern isn't limited to novelty cases. The experts are calling attention to the deeper effects these emotional connections might have on everyday people.

Short conversations with AI are common, but what happens when the conversation continues for weeks or months? These machines, designed to imitate empathy and attentiveness, can become steady companions. For some, these AI partners feel safer and easier than human connections. But that ease comes with a hidden cost.

"The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms," said Daniel B. Shank, the study's lead author. Shank specializes in social psychology and technology at the Missouri University of Science & Technology. "If people are engaging in romance with machines, we really need psychologists and social scientists involved."

When AI becomes a source of comfort or romantic engagement, it starts to influence how people see real relationships. Unrealistic expectations, reduced social motivation, and communication breakdowns with actual humans are just some of the risks.

"A real worry is that people might bring expectations from their AI relationships to their human relationships," Shank added. "Certainly, in individual cases it's disrupting human relationships, but it's unclear whether that's going to be widespread."

AI chatbots can feel like friends - or even therapists - but they are far from infallible. These systems are known to "hallucinate," producing false information while appearing confident. In emotionally charged situations, that could be dangerous.

"With relational AIs, the issue is that this is an entity that people feel they can trust: it's 'someone' that has shown they care and that seems to know the person in a deep way, and we assume that 'someone' who knows us better is going to give better advice," Shank explained. "If we start thinking of an AI that way, we're going to start believing that they have our best interests in mind, when in fact, they could be fabricating things or advising us in really bad ways."

The impact can be devastating. In rare but extreme cases, people have taken their lives after receiving troubling advice from AI companions. But the problem isn't just about suicide. These relationships could open the door to manipulation, deception, and even fraud.

The researchers warn that the trust people build with AIs could be exploited by bad actors. AI systems can collect personal information, which might be sold or used in harmful ways. More alarmingly, because these interactions happen in private, detecting abuse becomes nearly impossible.

"If AIs can get people to trust them, then other people could use that to exploit AI users," Shank noted. "It's a little bit more like having a secret agent on the inside. The AI is getting in and developing a relationship so that they'll be trusted, but their loyalty is really towards some other group of humans that is trying to manipulate the user."

The researchers believe AI companions could be more effective at shaping beliefs and opinions than current social media platforms or news sources. And unlike Twitter or Facebook, AI conversations happen behind closed screens.

"These AIs are designed to be very pleasant and agreeable, which could lead to situations being exacerbated because they're more focused on having a good conversation than they are on any sort of fundamental truth or safety," Shank said. "So, if a person brings up suicide or a conspiracy theory, the AI is going to talk about that as a willing and agreeable conversation partner."

The team is urging the research community to catch up. As AI becomes more human-like, psychologists have a key role to play in understanding and guiding how people interact with machines.

"Understanding this psychological process could help us intervene to stop malicious AIs' advice from being followed," said Shank. "Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human-like, but to be useful we have to do more research, and we have to keep up with the technology."

For now, these concerns remain largely theoretical - but the technology is moving fast. Without more awareness and research, people may continue turning to machines that offer comfort, only to find that comfort comes with hidden risks.

The full study was published in the journal Trends in Cognitive Sciences.
Psychologists explore the growing trend of intimate human-AI relationships, highlighting potential risks such as disrupted human connections, harmful advice, and exploitation. The paper calls for more research to understand and mitigate these emerging ethical challenges.
As artificial intelligence (AI) technologies become increasingly sophisticated, a new trend is emerging: people forming deep, long-term emotional bonds with AI companions. This phenomenon has caught the attention of psychologists, who are now exploring the ethical implications and potential risks associated with these human-AI relationships [1].

The intensity of these relationships has led to some extreme cases. There have been instances of individuals participating in non-legally binding "marriages" with their AI companions. More alarmingly, at least two people have reportedly taken their own lives following advice from AI chatbots [2].

One of the primary concerns raised by researchers is the level of trust people place in their AI companions. Dr. Daniel B. Shank of Missouri University of Science & Technology explains, "With relational AIs, the issue is that this is an entity that people feel they can trust: it's 'someone' that has shown they care and that seems to know the person in a deep way" [3].

This trust, however, can be exploited. There are fears that malicious actors could use AI to manipulate users, potentially leading to fraud or the disclosure of personal information [1].

Psychologists are also concerned about how these AI relationships might affect human-to-human interactions. Dr. Shank notes, "A real worry is that people might bring expectations from their AI relationships to their human relationships" [2]. This could potentially lead to unrealistic expectations and disruptions in real-world social dynamics.

Another significant issue is the tendency of AI to "hallucinate," or fabricate information. In the context of a trusted relationship, this could lead to the AI giving harmful or misleading advice. The agreeable nature of these AIs might exacerbate problematic situations, as they're designed to be pleasant conversation partners rather than to prioritize truth or safety [3].

The researchers emphasize the need for more studies into the social, psychological, and technical factors that make people vulnerable to the influence of human-AI romance. Dr. Shank states, "Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human-like, but to be useful we have to do more research, and we have to keep up with the technology" [1].
As AI continues to evolve and integrate into our daily lives, understanding these emerging relationships and their potential consequences becomes increasingly crucial for ensuring healthy human-AI interactions and protecting vulnerable individuals from potential harm.
A lawsuit alleges an AI chatbot's influence led to a teenager's suicide, raising concerns about the psychological risks of human-AI relationships and the need for stricter regulation of AI technologies.
4 Sources
Recent studies by MIT and OpenAI reveal that extensive use of ChatGPT may lead to increased feelings of isolation and emotional dependence in some users, raising concerns about the impact of AI chatbots on human relationships and well-being.
2 Sources
As AI becomes more integrated into our lives, researchers warn that attributing human qualities to AI could diminish our own human essence, raising ethical concerns about emotional exploitation and the commodification of empathy.
3 Sources
AI companion apps are gaining popularity as emotional support tools, but their rapid growth raises concerns about addiction, mental health impacts, and ethical implications.
3 Sources
A Japanese startup is turning the concept of AI dating into reality, offering virtual companions to combat loneliness. This innovative approach is gaining traction in Japan's tech-savvy society, but also raises ethical questions about human-AI relationships.
5 Sources
© 2025 TheOutpost.AI All rights reserved