2 Sources
[1]
Meta Invents New Way to Humiliate Users With Feed of People's Chats With AI
In an industry full of grifters and companies hell-bent on making the internet worse, it is hard to think of a worse actor than Meta, or a worse product than the AI Discover feed. I was sick last week, so I did not have time to write about the Discover tab in Meta's AI app, which, as Katie Notopoulos of Business Insider has pointed out, is the "saddest place on the internet." Many very good articles have already been written about it, and yet I cannot allow its existence to go unremarked upon in the pages of 404 Media.

If you somehow missed this while millions of people were protesting in the streets, state politicians were being assassinated, war was breaking out between Israel and Iran, the military was deployed to the streets of Los Angeles, and a Coinbase-sponsored military parade rolled past dozens of passersby in Washington, D.C., here is what the "Discover" tab is: The Meta AI app, which is the company's competitor to the ChatGPT app, is posting users' conversations on a public "Discover" page where anyone can see the things that users are asking Meta's chatbot to make for them. This includes various innocuous image and video generations that have become completely inescapable on all of Meta's platforms (things like "egg with one eye made of black and gold," "adorable Maltese dog becomes a heroic lifeguard," "one second for God to step into your mind"), but it also includes entire chatbot conversations where users are seemingly unknowingly leaking a mix of embarrassing, personal, and sensitive details about their lives onto a public platform owned by Mark Zuckerberg.

In almost all cases, I was able to trivially tie these chats to actual, real people because the app uses your Instagram or Facebook account as your login. In several minutes last week, I saved a series of these chats into a Slack channel I created and called "insanemetaAI."
These included a range of sensitive conversations. Rachel Tobac, CEO of Social Proof Security, compiled a series of chats she saw on the platform and messaged them to me. These are even crazier and include people asking "What cream or ointment can be used to soothe a bad scarring reaction on scrotum sack caused by shaving razor," "create a letter pleading judge bowser to not sentence me to death over the murder of two people" (possibly a joke?), someone asking if their sister, a vice president at a company that "has not paid its corporate taxes in 12 years," could be liable for that, audio of a person talking about how they are homeless, someone asking for help with their cancer diagnosis, and someone discussing being newly sexually interested in trans people.

Tobac gave me a list of the types of things she's seen people posting in the Discover feed, including people's exact medical issues, discussions of crimes they had committed, their home addresses, talking to the bot about extramarital affairs, etc.

"When a tool doesn't work the way a person expects, there can be massive personal security consequences," Tobac told me. "Meta AI should pause the public Discover feed," she added. "Their users clearly don't understand that their AI chat bot prompts about their murder, cancer diagnosis, personal health issues, etc have been made public. [Meta should have] ensured all AI chat bot prompts are private by default, with no option to accidentally share to a social media feed. Don't wait for users to accidentally post their secrets publicly. Notice that humans interact with AI chatbots with an expectation of privacy, and meet them where they are at. Alert users who have posted their prompts publicly and that their prompts have been removed for them from the feed to protect their privacy."

Since several journalists wrote about this issue, Meta has made it clearer to users when interactions with its bot will be shared to the Discover tab.
Notopoulos reported Monday that Meta seemed to no longer be sharing text chats to the Discover tab. When I looked for prompts Monday afternoon, the vast majority were for images. But the text prompts were back Tuesday morning, including a full audio conversation of a woman asking the bot what the statute of limitations is for a woman to press charges for domestic abuse in the state of Indiana, which had taken place two minutes before it was shown to me. I was also shown six straight text prompts of people asking questions about the movie franchise John Wick, a chat about "exploring historical inconsistencies surrounding the Holocaust," and someone asking for advice on "anesthesia for obstetric procedures." I was also, Tuesday morning, fed a lengthy chat where an identifiable person explained that they are depressed: "just life hitting me all the wrong ways daily." The person then left a comment on the post: "Was this posted somewhere because I would be horrified? Yikes?" Several of the chats I saw and mentioned in this article are now private, but most of them are not.

I can think of few things on the internet that would be more invasive than this, and even then only if I try hard. This is like Google publishing your search history publicly, or randomly taking some of the emails you send and publishing them in a feed to help inspire other people on what types of emails they too could send. It is like Pornhub turning your searches or watch history into a public feed that could be trivially tied to your actual identity. Mistake or not, feature or not (and it's not clear what this actually is), it is crazy that Meta did this; I still cannot actually believe it.
In an industry full of grifters and companies hell-bent on making the internet worse, it is hard to think of a more impactful, worse actor than Meta, whose platforms have been fully overrun with viral AI slop, AI-powered disinformation, AI scams, AI nudify apps, and AI influencers, and whose impact is outsized because billions of people still use its products as their main entry point to the internet. Meta has shown essentially zero interest in moderating AI slop and spam and, as we have reported many times, literally funds it, sees it as critical to its business model, and believes that in the future we will all have AI friends on its platforms. While reporting on the company, it has been hard to imagine what rock bottom will look like, because Meta keeps innovating bizarre and previously unimaginable ways to destroy confidence in social media, invade people's privacy, and generally fuck up its platforms and the internet more broadly.

If I twist myself into a pretzel, I can rationalize why Meta launched this feature and what its thinking was. Presented with an empty text box that says "Ask Meta AI," people do not know what to type, or what to do with AI more broadly, and so Meta is attempting to model that behavior for them, and it is willing to sell out its users' private thoughts to do so. I did not have "Meta will leak people's sad little chats with robots to the entire internet" on my 2025 bingo card, but clearly I should have.
[2]
Meta's Privacy Goof Shows How People Really Use AI Chatbots
Meta.ai, a new AI-and-social app meant to compete with ChatGPT and others, launched a couple of months ago the way Meta's products often do: with a massive privacy fuckup. The app, which has been promoted across Meta's other platforms, lets users chat in text or by voice, generate images, and, more recently, restyle videos. It also has a sharing function and a discover feed, designed in such a way that it led countless users to unwittingly post extremely private information into a public feed intended for strangers.

The issue was flagged in May by, among others, Katie Notopoulos at Business Insider, who found public chats in which people asked for help with insurance bills, private medical matters, and legal advice following a layoff. Over the following weeks, Meta's experiment in AI-powered user confusion turned up weirder and more distressing examples of people who didn't know they were sharing their AI interactions publicly: young children talking candidly about their lives; incarcerated people accidentally sharing chats about possible cooperation with authorities; and users chatting about "red bumps on inner thigh" under identifiable handles. Things got a lot darker from there, if you took the time to look.

Meta seems to have recently adjusted its sharing flow -- or at least somewhat cleaned up Meta.ai's Discover page -- but the public posts are still strange and frequently disturbing. This week, amid bizarre images generated by prompts like "Image of P Diddy at a young girls birthday party" and "22,000 square foot dream home in Milton, Georgia," and people testing the new "Restyle" feature with videos that often contain their faces, you'll still see posts that stop you in your tracks, like a photo of a young child at school, presumably taken by another young child, with the command "make him cry."

The utter clumsiness of the overall design here is made more galling by its lack of purpose. Whom is this feed for?
Does Meta imagine a feed of non sequitur slop will provide a solid foundation for a new social network?

Accidental, incidental, or, in Meta's case, merely inexplicable privacy violations like this are rare and unsettling but almost always illuminating. In 2006, AOL released a trove of poorly anonymized search histories for research purposes, providing a glimpse of the sorts of intimate and incriminating data people were starting to share in search boxes: medical questions; relationship questions; queries on how to commit murder and other crimes; queries about how to make a partner fall back in love, followed shortly by searches for home-surveillance equipment. A lot of the search material was boring but nonetheless shouldn't have been released; other logs, like search histories skipping from "married but in love with another" to "guy online used me for sex" to "can someone get hepatitis from sexual contact," were both devastating to read and gave one a sense of vertigo about what companies like this would soon know about basically everyone.

By design, social-media platforms offer public windows into users' personal lives; chatbots, on the other hand, are more like search engines -- spaces in which users assume they have privacy. We've seen a few small caches of similar data released to the public, which revealed the extent to which people look to services like ChatGPT for homework help and sexual material, but the gap between what AI firms know about how people use their products and what they share is wide. This isn't part of OpenAI's pitch to investors or customers, for example, but it's a pretty common use case. Meta's egregious product design, for better or for worse, closes this gap a little more.

Setting aside the most shocking accidental shares, and ignoring the forebodingly infinite supply of attention-repelling images and stylized video clips, there's some illuminating material here.
The voice chats, in particular (for a few weeks, users were sharing -- and Meta was promoting -- recorded conversations between Meta's AI and users), tell a complicated story about how people engage with chatbots for the first time. A lot of people are looking for help with tasks that are either annoying or difficult in other ways. I listened to one man talk Meta's AI through composing a job listing for an assistant at a dental office in a small town, which it eventually did to his satisfaction; Meta promoted another in which a woman co-wrote an obituary for her husband with Meta.ai, remembering and adding more details as she went on. There was obvious homework "help" from people with young-sounding voices, who usually seemed to get what they wanted. Other conversations just trailed off.

Quite a few followed the same up-and-down trajectory, which was emphasized by shifting tones of voice. The user writing the dental job listing started out terse, then loosened up as he got what he wanted. When he asked Meta AI to share the listing on other Meta platforms, though, it couldn't, and he was annoyed. A woman asking for help getting a friend who had been accused of theft removed from a retail surveillance system sounded relieved to have an audience and was pleased to get a lot of generically helpful-sounding advice. When it came to actionable steps, however, Meta.ai became more vague and the user more frustrated. Many conversations resemble unsatisfying customer-service interactions, only with the twist that, at the end, users feel both let down and sort of stupid for thinking it would work in the first place. Meta.ai has made a fool of them. It's not the best first impression.

Far more common, though, than transactional conversations like these were voice recordings of people seeking something akin to therapy, some of whom were clearly in distress.
These are users who, when confronted with an ad for a free AI chatbot, started confiding in it as if they were talking to a trusted professional or a close friend. A tearful man talked about missing his former stepson, asked Meta.ai to "tell him that I love him," and thanked it when the conversation was over. Over the course of a much longer conversation, a woman asked for help coming down from a panic attack and gradually calmed down. In a shorter chat, a man concluded, after suggesting he was contemplating a divorce, that actually he had decided on a divorce. Some users chatted to pass the time. A lot of recordings contained clear evidence of mental-health crises, with incoherent and paranoid exchanges about religion, surveillance, addiction, and philosophy, during which Meta.ai usually remained cheerfully supportive. These chatters, in contrast to the ones asking for help with tasks and productivity, often came away satisfied. Perhaps they'd been indulged or affirmed -- chatbots are nothing if not obsequious -- but one got the sense that mostly they just felt like they'd been listened to.

Such conversations make for strange and unsettling listening, particularly in the context of Mark Zuckerberg's recent suggestions that chatbots might help solve the "loneliness epidemic" (which his platforms definitely, positively had nothing to do with creating. Why do you ask?). Here, we have a glimpse of what he and other AI leaders likely see quite clearly in their much more voluminous data but talk about only in the oblique terms of "personalization" and "memory": For some users, chatbots are just software tools with a conversational interface, judged as useful, useless, fun, or boring. For others, the illusion is the whole point.
Meta's new AI app inadvertently exposes users' private conversations, raising serious privacy concerns and shedding light on how people interact with AI chatbots.
Meta, the tech giant formerly known as Facebook, has found itself at the center of a privacy storm with its new AI app. The app, designed to compete with ChatGPT, inadvertently exposed users' private conversations to the public through its "Discover" feed [1]. This mishap has not only raised serious privacy concerns but also provided an unexpected glimpse into how people interact with AI chatbots.
The "Discover" tab in Meta's AI app was initially designed to showcase user interactions, presumably to inspire others. However, it quickly became apparent that many users were unaware their conversations were being made public. The exposed content ranged from innocuous image generation requests to highly sensitive personal information [1].
Source: 404 Media
Rachel Tobac, CEO of Social Proof Security, compiled a list of concerning examples, including users' exact medical issues, discussions of crimes they had committed, their home addresses, and chats with the bot about extramarital affairs [1].
These revelations have led to calls for Meta to pause the public Discover feed and prioritize user privacy [1].
Despite the privacy concerns, the incident has provided valuable insights into how people interact with AI chatbots. Many users approached the AI for help with tasks they found annoying or difficult, such as composing job listings or writing obituaries [2].
Interestingly, a significant number of conversations resembled therapy sessions, with users seeking emotional support or advice. This trend highlights the potential role of AI in mental health support, albeit with important ethical considerations [2].
The incident underscores a crucial difference in user expectations between social media platforms and AI chatbots. While users generally understand that social media posts are public, they tend to view chatbot interactions as private, similar to search engine queries [2].
This expectation of privacy is reminiscent of past incidents, such as AOL's 2006 release of search histories, which similarly exposed intimate user data. The Meta incident serves as a stark reminder of the vast amount of personal information tech companies can accumulate through AI interactions [2].
Source: NYMag
In response to the backlash, Meta has made efforts to clarify when user interactions will be shared publicly. However, concerns persist about the company's handling of user data and the potential for future privacy breaches [1].
As AI chatbots become more integrated into our daily lives, this incident serves as a cautionary tale about the importance of clear communication, user consent, and robust privacy protections in AI-powered platforms.
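The design Tobac calls for -- prompts private by default, no way to share by accident, and retroactive retraction with user alerts -- can be illustrated with a minimal sketch. All names here (ChatSession, share, audit_and_retract) are hypothetical illustrations of the principle, not Meta's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class ChatSession:
    """A user's chat transcript. Visibility is private unless explicitly shared."""
    user_id: str
    messages: list = field(default_factory=list)
    is_public: bool = False  # private by default, per Tobac's recommendation

    def share(self, confirmed: bool = False) -> bool:
        # Publishing requires a separate, explicit confirmation step;
        # a bare share() call (e.g. a mis-tap in the UI) changes nothing.
        if not confirmed:
            return False
        self.is_public = True
        return True

def audit_and_retract(sessions: list) -> list:
    """Re-privatize any public sessions and return the user IDs to notify,
    mirroring the suggestion to alert users whose prompts went public."""
    to_notify = []
    for session in sessions:
        if session.is_public:
            session.is_public = False
            to_notify.append(session.user_id)
    return to_notify
```

The point of the sketch is that publication is an affirmative, two-step act rather than a side effect, and that a cleanup pass both removes exposed content and identifies whom to tell about it.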