5 Sources
[1]
Mark Zuckerberg says people can fill the need for friends with AI, but 'there is no replacement' for human relationships, psychologist says
Mark Zuckerberg, chief executive officer of Meta Platforms Inc., during the Meta Connect event on Wednesday, Sept. 25, 2024.

In an April interview on the Dwarkesh Podcast, Meta founder and CEO Mark Zuckerberg discussed the opportunity presented by AI relationships. The average American has "three people that they would consider friends," he said. "And the average person has demand for meaningfully more. I think it's, like, 15."

Psychologists dispute the idea that there is a "right" number of friends. For most people, having three or four close friends is "more than enough," says Omri Gillath, professor of psychology at the University of Kansas.

Still, for those who need more, Zuckerberg believes AI will be able to fill in the gaps. "I would guess that over time," he said, "we will find the vocabulary as a society to be able to articulate why that is valuable." Gillath disagrees with this, too. The idea that AI could one day replace human relationships is "definitely not supported by research," he says.
[2]
I'll solve the loneliness epidemic with AI, says Mark Zuckerberg. But isn't his best mate even more money? | Emma Brockes
He'd like to persuade us that chatting to a bot is like having a friend, and of course, that's nonsense. But if we succumb, it's another income stream for him.

Mark Zuckerberg has gone on a promotional tour to talk up the potential of AI in human relationships. I know; listening to Zuck on friendship is a bit like taking business advice from Bernie Madoff or lessons in sportsmanship from Tonya Harding. But at recent tech conferences and on podcasts, Zuck has been saying he has seen the future and it's one in which the world's "loneliness epidemic" is alleviated by people finding friendship with "a system that knows them well and that kind of understands them in the way that their feed algorithms do". In essence, we'll be friends with AI, instead of people. The missing air quotes around "knows" and "understands" mark a distinction we can assume Zuck neither knows nor understands.

This push by the 41-year-old tech leader would be less startling if it weren't for the fact that semi-regularly online now you can find people writing about their relationships with their AI therapist or chatbot and insisting that if it's real to them, then it's real, period. The chatbot is, they will argue, "actively" listening to them. On a podcast with Dwarkesh Patel last month, Zuck envisaged a near-future in which "you'll be scrolling through your feed, and there will be content that maybe looks like a Reel to start, but you can talk to it, or interact with it and it talks back". The average American, he said, has fewer than three friends but needs more. Hey presto, a ready solution.

The problem, obviously, isn't that chatting to a bot gives the illusion of intimacy, but that, in Zuckerberg's universe, it is indistinguishable from real intimacy, an equivalent and equally meaningful version of human-to-human contact. If that makes no sense, suggests Zuck, then either the meaning of words has to change or we have to come up with new words: "Over time," says Zuckerberg, as more and more people turn to AI friends, "we'll find the vocabulary as a society to be able to articulate why that is valuable".

My hunch is that this vocab Zuckerberg is hoping to evolve won't be the English equivalent of one of those compound German words with the power to articulate, in a single term, the "value" of a chatbot as "something that might look superficially like intimacy and might even satisfy the intimacy requirements of someone who neither understands nor values human interaction, but is in fact as lacking in the single requirement for the definition of 'intimacy' to stand - consciousness - as a blow-up doll from the 1970s". Instead, what Zuck seems to mean is that we'll just relax the existing meanings of words such as "human", "understanding", "knowing" and "relationship" to encompass the AI product he happens to be selling. After all, this is just an extension of the argument he made in 2006 when he first sold us on Facebook: namely, that online or computerised interaction is as good as if not better than the real thing.

The sheer wrongness of this argument is so stark that it puts anyone who gives it more than a moment's thought in the weird position of having to define units of reality as basic as "person". To extend Zuckerberg's logic: a book can make you feel less alone and that feeling can be real. Which doesn't mean that your relationship with the author is genuine, intimate or reciprocated in anything like the way a relationship with your friends is. Must we list the ways?
Given Zuckerberg's easy rejection of basic norms, I guess we must. Human friends are conscious and responsive in unpredictable ways that increase our own sense of self in relation to them. More practically, human friends can introduce us to other humans, one of whom we might date, marry, be offered a job by, or add to our store of existing friends who nourish us and make us laugh. Perhaps mercifully, AI friends can't make us go camping or force us to organise their hen night. But that is because our relationship with an AI friend is not a relationship at all, and when we talk to them, we're alone in the room.

A worse issue than fraudulence, apart from the horrible possibility that already lonely people - particularly young men - will be sold this kind of "intimacy" as an answer to their problems and discouraged from seeking out other people, is that any interaction with AI is by necessity commercial in nature. Perhaps that's simply where we are, now. If you want real, searing, soul-level engagement, then find someone who looks at you the way an AI chatbot looks at your data.
[3]
Mark Zuckerberg wants everyone to have AI friends, but I think he's missing the point of AI, and the point of friendship
Friendships are a vital part of most people's lives. They can be complicated and messy, but a good friendship is worth it, since, as Aristotle said, "without friends no one would choose to live, though he had all other goods."

Mark Zuckerberg has a potential solution for those seeking to build new friendships: building new friends using AI. That's only a slight rewording of the viewpoint of the Meta CEO, who is famous for, among other things, popularizing the term "friending" as a verb. With caveats about the ways human friendships offer things no AI currently can, Zuckerberg explained on a podcast hosted by Dwarkesh Patel that people like to engage with AI chatbots like Meta AI about their personal lives. And since most Americans have far fewer friends than they'd like, there's space for AI as an alternative. "As the personalization loop kicks in and the AI starts to get to know you better and better, that will just be really compelling," Zuckerberg said.

But compelling conversation doesn't mean real friendship. AI isn't your friend. It can't be. And the more we try to make it one, the more we end up misunderstanding both AI and actual friendship.

AI is a tool. An amazing, occasionally dazzling, often frustrating tool, but a tool no different than your text message autocomplete or your handy Swiss Army knife. It's designed to assist you and make your life easier. It's not a being. It has no inner monologue. It's all surface and syntax. A robotic parrot that reads the internet instead of mimicking your catchphrases. Mimicry and scripted empathy are not real connections. They're just performance without sentience.

Real friendship is not just about someone helping you all the time, selflessly, without ever asking for something in return. If you text your friend and they respond based on a probability matrix, they're not really being your friend. While I love a clean UI as much as the next person, I don't confuse it with love. At best, an AI friend is a pet. But not even a warm, wiggly dog or a judgmental cat. More like a betta fish or a Tamagotchi. A reactive presence you can project feelings onto. It's always there, sure. But it doesn't care about you. And deep down, you know it.

Meanwhile, on another podcast with Ben Thompson, Zuckerberg suggested that even if you don't have a human therapist, you should at least have an AI. Therapy is expensive, and there's a mental health crisis with more demand than supply. If an AI chatbot can step in and offer comfort to someone who's struggling, it's hard to argue that's a bad thing. And it's not a bad idea in isolation, but the details can be tricky.

While some chatbot-based wellness apps have shown promise, they're only necessary because of the enormous resource gap in providing mental health services. After all, a trained therapist does more than rely on your words or big, obvious emotional tone. They pick up on the unsaid. They recognize when a smile hides your spiral. They make judgment calls that algorithms can't. Most importantly, they're bound by ethics in a way no program can match. They're licensed. No matter how stringent an AI's rules are now, all it takes is a change in programming for it to upload your emotional baggage to a server farm. That's before mentioning the irony of a social media company wanting to offer mental health services when its products are often linked to worsening teen mental health and a digital addiction that can isolate people from actual friends.

I talk to AI tools every day. I think AI can be very useful.
I think my automatic coffeemaker can be very useful too, even if I'm more likely to be yelling at it to go faster than to bare my soul to it. And AI can support therapists, enhance education, and offer customer service at 3 a.m. without the usual hold music. But it's not a surrogate for human connection. We're not at a point where I fear everyone will retreat from messy, inconvenient, flawed human relationships and opt for the sanitized, low-stakes comfort of a chatbot who always agrees with us. But that doesn't mean it's something to look forward to. You can't scale friendship, and you shouldn't encourage people to choose software over doing the work of real friendship. An AI will treat you just like it treats everyone, and, as Aristotle also said, "A friend to all is a friend to none."
[4]
What Mark Zuckerberg Is Missing on AI and Loneliness
Zuckerberg is correct that there's a real problem. The loneliness epidemic is increasingly serious. Surveys show that Americans' in-person interactions have dropped by as much as 45% in recent years across certain groups. Beyond loneliness itself, the challenge can be described in terms of falling trust and social cohesion: a deficit of belonging. There's growing evidence that social media and the decline of in-person social connection have coincided with major increases in anxiety and depression, as well as political polarization and pessimism about the future. Today, the U.S. ranks last among G7 countries in terms of trust in public institutions.

The Meta founder is also right that AI can meet some of a person's immediate emotional needs. Since the 1960s, when MIT researchers developed ELIZA, a program designed to mimic a psychotherapist, we've known that even basic AI interactions can provide temporary comfort. Contemporary studies even show that ChatGPT responses are rated highly in therapeutic contexts, suggesting these AI systems may provide accessible support without the biases and limitations of human therapists. While they may have their own biases and hallucinations, AI companions offer consistency and immediate availability, and can tailor interactions precisely to an individual's preferences, something busy friends or family members can't always do.

Still, the case for preserving real human bonds isn't just a romantic ideal or techno-skepticism. Connection is what makes us human, and despite Zuckerberg's enthusiasm, there's clear evidence that real human interaction can't be replaced by machines. Researchers like Julianne Holt-Lunstad of Brigham Young University have demonstrated how face-to-face interactions reduce not only psychological distress but physical health problems, including cardiovascular disease. Neuroscientist Marco Iacoboni of UCLA highlights the role of "mirror neurons," specialized brain cells activated only through direct human interactions, crucial for empathy and emotional understanding, capacities AI interactions cannot stimulate.
[5]
Can AI truly replace human friendships? Mark Zuckerberg believes it can, but a psychologist weighs in
In an age where loneliness is rising and digital companionship is just a click away, Meta CEO Mark Zuckerberg believes he has a solution: artificial intelligence. In a recent conversation on the Dwarkesh Podcast, Zuckerberg painted a vision of a world where AI friends help fill the emotional void for millions. "The average American has three people they would consider friends," he observed. "And the average person has demand for meaningfully more. I think it's, like, 15."

But behind this techno-optimism lies a growing unease among psychologists. Can the glowing screen truly stand in for the warmth of a human connection? According to a report from CNBC Make It, experts like Omri Gillath, a psychology professor at the University of Kansas, don't think so. "There is no replacement for these close, intimate, meaningful relationships," he cautions. What Zuckerberg sees as an opportunity, Gillath sees as a potentially hollow, even harmful, substitute.

Zuckerberg's remarks come at a time when AI-powered "friends", always available, ever-patient, and endlessly affirming, are gaining popularity. For those feeling isolated, the allure is undeniable. No judgment, no scheduling conflicts, and no emotional baggage. Gillath acknowledges these momentary comforts: "AI is available 24/7. It's always going to be polite and say the right things." But therein lies the problem. While these digital entities may seem emotionally responsive, they lack true emotional depth. "AI cannot introduce you to their network," Gillath points out. "It cannot play ball with you. It cannot introduce you to a partner." Even the warmest conversation with a chatbot, he argues, cannot compare to the healing power of a hug or the spark of spontaneous laughter with a friend.

Still, people are beginning to develop strong emotional attachments to AI. Earlier this year, The New York Times reported on a woman who claimed to have fallen in love with ChatGPT. Her story is not unique, and it reflects a growing trend of people projecting real feelings onto these artificial companions. Yet these connections, Gillath insists, are ultimately "fake" and "empty." AI may mimic empathy, but it cannot reciprocate it. The relationship is one-sided, a digital mirror reflecting your emotions back at you but never feeling them itself.

Beyond emotional shallowness, there may be more serious psychological consequences of replacing human interaction with AI. Gillath points to troubling trends among youth: higher anxiety, increased depression, and stunted social skills in those heavily reliant on AI for communication. "Use AI for practice, but not as a replacement," he advises.

The concern isn't just about emotional well-being; it's also about trust. "These companies have agendas," Gillath warns. Behind every AI friend is a business model, a data strategy, a bottom line. Meta's recent unveiling of a ChatGPT-style app was the backdrop for Zuckerberg's remarks. It's not just about technology; it's about market share.

Zuckerberg is right about one thing: people are craving more connection. But the answer may not be more sophisticated algorithms; it might be more vulnerability, more community, more effort to connect in real life. "Join clubs, find people with similar interests, and work on active listening," Gillath recommends. In other words, pursue messy, unpredictable, profoundly human relationships. Because no matter how convincing AI becomes, it will never know what it means to truly care. Can an algorithm be your best friend? Maybe. But it will never be your real friend.
Meta CEO Mark Zuckerberg's proposal to use AI as a solution for loneliness faces criticism from psychologists who argue that artificial intelligence cannot replace genuine human connections.
Mark Zuckerberg, CEO of Meta, has recently proposed a controversial solution to the growing loneliness epidemic: AI friends. In an interview on the Dwarkesh Podcast, Zuckerberg suggested that AI could fill the gap for people who desire more friendships than they currently have [1]. He stated, "The average American has three people that they would consider friends, and the average person has demand for meaningfully more. I think it's, like, 15" [1].

Zuckerberg envisions a future where AI companions become an integral part of people's social lives. He believes that as AI personalization improves, these digital relationships will become "really compelling" [3]. This idea aligns with Meta's recent unveiling of a ChatGPT-style app, indicating a strategic move in the AI market [5].

However, Zuckerberg's proposal has met with significant skepticism from psychology experts. Omri Gillath, a professor of psychology at the University of Kansas, strongly disagrees with the notion that AI can replace human relationships. "There is no replacement for these close, intimate, meaningful relationships," Gillath argues [5].
Psychologists highlight several key reasons why AI friendships fall short:

- AI can mimic empathy but cannot reciprocate it; the relationship remains one-sided and emotionally shallow [5].
- An AI companion cannot introduce you to new people, play ball with you, or offer the physical comfort of a hug [5].
- Heavy reliance on AI for communication has been linked to higher anxiety, depression, and stunted social skills among young people [5].
- Every interaction with an AI "friend" is ultimately commercial in nature, shaped by the business model of the company behind it [2].
The debate occurs against the backdrop of a growing loneliness epidemic. Surveys indicate that Americans' in-person interactions have decreased by up to 45% in recent years [4]. This trend coincides with the rise of social media and digital communication, which have been linked to increased anxiety, depression, and political polarization [4].

Ironically, social media companies like Meta, which Zuckerberg leads, have been criticized for contributing to these issues. Their products have been associated with worsening teen mental health and digital addiction, potentially isolating people from real-world friendships [3].

Despite the criticism, some experts acknowledge potential benefits of AI companionship. AI chatbots can offer consistent, immediate, and tailored interactions, which may provide some emotional support [4]. Studies have shown that ChatGPT responses are rated highly in therapeutic contexts, suggesting AI systems could offer accessible mental health support [4].
However, these benefits come with significant limitations and risks:

- A chatbot cannot pick up on unspoken cues, such as a smile that hides distress, or make the judgment calls a trained, licensed therapist can [3].
- AI systems are not bound by professional ethics, and a change in programming could expose the sensitive personal information users share [3].
- Attachments to AI companions remain one-sided, offering the appearance of intimacy without genuine care, and the companies behind them have their own agendas [5].
While acknowledging the loneliness problem, experts suggest that the solution lies in fostering real human connections rather than turning to AI. Gillath recommends joining clubs, finding people with similar interests, and working on active listening skills [5]. The challenge is to create opportunities for meaningful human interactions in an increasingly digital world.
As the debate continues, it's clear that the intersection of AI, social connection, and mental health will remain a critical area of discussion and research in the coming years.
Summarized by Navi