24 Sources
[1]
ChatGPT users shocked to learn their chats were in Google search results
Faced with mounting backlash, OpenAI removed a controversial ChatGPT feature that caused some users to unintentionally allow their private -- and highly personal -- chats to appear in search results. Fast Company exposed the privacy issue on Wednesday, reporting that thousands of ChatGPT conversations were found in Google search results and likely only represented a sample of chats "visible to millions." While the indexing did not include identifying information about the ChatGPT users, some of their chats did share personal details -- like highly specific descriptions of interpersonal relationships with friends and family members -- perhaps making it possible to identify them, Fast Company found. OpenAI's chief information security officer, Dane Stuckey, explained on X that all users whose chats were exposed opted in to indexing their chats by clicking a box after choosing to share a chat. Fast Company noted that users often share chats on WhatsApp or select the option to save a link to visit the chat later. But as Fast Company explained, users may have been misled into sharing chats due to how the text was formatted: "When users clicked 'Share,' they were presented with an option to tick a box labeled 'Make this chat discoverable.' Beneath that, in smaller, lighter text, was a caveat explaining that the chat could then appear in search engine results." At first, OpenAI defended the labeling as "sufficiently clear," Fast Company reported Thursday. But Stuckey confirmed that "ultimately," the AI company decided that the feature "introduced too many opportunities for folks to accidentally share things they didn't intend to." According to Fast Company, that included chats about their drug use, sex lives, mental health, and traumatic experiences. Carissa Veliz, an AI ethicist at the University of Oxford, told Fast Company she was "shocked" that Google was logging "these extremely sensitive conversations."
[2]
Your public ChatGPT queries are getting indexed by Google and other search engines | TechCrunch
It's a strange glimpse into the human mind: if you filter search results on Google, Bing, and other search engines to only include URLs from the domain "https://chatgpt.com/share," you can find strangers' conversations with ChatGPT. Sometimes, these shared conversation links are pretty dull -- people ask for help renovating their bathroom, understanding astrophysics, and finding recipe ideas. In another case, one user asks ChatGPT to rewrite their resume for a particular job application (judging by this person's LinkedIn, which was easy to find based on the details in the chat log, they did not get the job). Someone else is asking questions that sound like they came out of an incel forum. Another person asks the snarky, hostile AI assistant if they can microwave a metal fork (for the record: no), but they continue to ask the AI increasingly absurd and trollish questions, eventually leading it to create a guide called "How to Use a Microwave Without Summoning Satan: A Beginner's Guide." ChatGPT does not make these conversations public by default. A conversation would be appended with a "/share" URL only if the user deliberately clicks the "share" button on their own chat and then clicks a second "create link" button. The service also declares that "your name, custom instructions, and any messages you add after sharing stay private." However, users probably do not anticipate that Google will index their shared ChatGPT links, potentially betraying personal information (my apologies to the person whose LinkedIn I discovered). Though unintentional, this is a norm that was established in part by Google. When people share public links to files from Google Drive, such as documents with the "Anyone with link can view" setting, Google may index them in Search. However, Google generally does not surface links to Drive documents that have not been publicly posted on the web -- for example, a document may appear in search if it is linked on a trusted website. But this doesn't seem to be the case for these ChatGPT logs. OpenAI did not provide comment before publication. "Neither Google nor any other search engine controls what pages are made public on the web," a Google spokesperson told TechCrunch. "Publishers of these pages have full control over whether they are indexed by search engines."
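For readers curious what that domain filter amounts to mechanically, the "site:chatgpt.com/share" operator is just a prefix test over URLs. Below is a minimal sketch of the same idea in Python; the helper name and sample URLs are invented for illustration, not taken from real chats.

```python
# Minimal sketch: filter a list of URLs down to shared-ChatGPT links,
# mirroring what the "site:chatgpt.com/share" search operator selects.
SHARE_PREFIX = "https://chatgpt.com/share/"

def is_shared_chat(url: str) -> bool:
    """True if the URL points at ChatGPT's public share path."""
    return url.startswith(SHARE_PREFIX)

urls = [
    "https://chatgpt.com/share/abc123",        # matches: a public shared chat
    "https://chatgpt.com/c/private-session",   # does not match: a normal chat URL
    "https://example.com/blog/chatgpt-tips",   # does not match: unrelated site
]

print([u for u in urls if is_shared_chat(u)])  # -> ['https://chatgpt.com/share/abc123']
```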
[3]
Be Careful What You Tell ChatGPT: Your Chats Could Show Up on Google Search
Many chatbot users seem to be unintentionally making their private ChatGPT conversations publicly available in search results on sites like Google, Bing, and DuckDuckGo. You can view a trove of others' conversations by adding site:chatgpt.com/share to your search queries. When we perused the results, one person asked for help with a passive-aggressive partner. A recently discharged veteran said they were "lonely, stressed, and struggling," and asked about local support groups. Another person asked ChatGPT to confirm a conspiracy theory that Cape Cod, Massachusetts, is "the deep state" with "secret tunnels" and "giant generators." As Fast Company first reported, there are seemingly endless pages of personal, private chats, all of which have one thing in common. People "shared" them by pressing a button in ChatGPT that created a link to that specific conversation. This feature, which rolled out in May 2023, was pitched as a "new way for users to share their ChatGPT conversations, replacing the old and burdensome method of sharing screenshots." Those with shared links can also continue the conversation. Shared links do not include the person's name or account details. But if someone includes a specific phrase or name in the chat, it may be easily identifiable and searchable. According to OpenAI's Shared Links FAQ, conversations will not appear in public search results unless you "manually enable the link to be indexed by search engines when sharing." The pop-up has a small checkbox that says, "Make this chat discoverable," which people may think is required. In smaller print below the box, it reads, "Allows it to be shown in web searches." A Google spokesperson says OpenAI is responsible for publishing the results to search engines, and search engines are not initiating this behavior. If you're worried about links you've shared on ChatGPT becoming public, you can edit or delete them. If you delete your ChatGPT account, your shared links also go down with it. Of course, some people might not care if their conversations are searchable. Maybe they're using a throwaway account or looking to engage with other like-minded web users. However, if you want to keep your chats between you, ChatGPT, and a close inner circle, be careful how you share. "People talk about the most personal shit in their lives to ChatGPT," OpenAI CEO Sam Altman said in an interview last week, TechCrunch reports. "People use it -- young people, especially, use it -- as a therapist, a life coach; having these relationship problems and [asking] 'what should I do?'" The company is also legally required to retain and share your personal chats in the event of a lawsuit. In other words, they can and will be used against you in court. 'A Massive SEO Goldmine' One Redditor calls search engines' indexing of ChatGPT conversations a "goldmine" for businesses, specifically people who work in Search Engine Optimization (SEO) and are trying to understand the type of content their audience wants to read online. "These conversations reveal exactly what your audience struggles with," they write. "The questions they're too embarrassed to ask publicly. Think about it: We've spent years trying to understand search intent through keyword research and user interviews. Now we can literally see the conversations people are having with AI about our industry." Meta's AI ran into a similar issue in June, when people were shocked to find out that sharing conversations meant they would show up in others' feeds. 
In response, Meta updated its AI app to show a warning prompt before sharing any chats to the public Discover feed. Meanwhile, after Google renamed Bard to Gemini in February 2024, people realized Gemini chats were appearing in Google search results using the site:gemini.google.com/share parameter, Search Engine Journal reports. That query no longer returns any results. The connection between Google and ChatGPT may also run deeper than surfacing shared links. Others have noticed that ChatGPT's answers may pull directly from Google search results, and sometimes appear exactly the same as Google's AI overview. "Well, we all knew ChatGPT is also a wrapper for Wikipedia and Google search," one Redditor writes.
[4]
After Backlash, ChatGPT Removes Option to Have Private Chats Indexed by Google
UPDATE 8/1: Following publication of this story, OpenAI removed all chats from Google search results, along with the checkbox in ChatGPT that enabled users to make them discoverable. When you search site:chatgpt.com/share on Google, there are no longer any results. The chats are still appearing in other search engines, such as Bing and DuckDuckGo, as of this writing, since the feature is still "rolling out to all users," says OpenAI CISO Dane Stuckey. (Here's one where someone fights with the chatbot, and asks it about the history of political corruption.) "Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option," Stuckey says. "We're also working to remove indexed content from the relevant search engines." An OpenAI spokesperson confirms to PCMag that it was an "experiment" that has now ended.
[5]
OpenAI removes ChatGPT self-doxing option
Checkbox to make chatbot conversations appear in search engines deemed a footgun OpenAI has removed the option to make ChatGPT interactions indexable by search engines to prevent users from unwittingly exposing sensitive information. The feature rollback follows reports of ChatGPT conversations being discoverable in search results, an option recently extended to ChatGPT users. Dane Stuckey, CISO of OpenAI, announced the change in a social media post. He described it as a short-lived experiment to help people discover useful conversations. "Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option," he said. "We're also working to remove indexed content from the relevant search engines. This change is rolling out to all users through tomorrow morning." Despite explicit warnings "not to share any sensitive content," ChatGPT users did so anyway, undermining their own privacy. Similar to the search advertising industry, AI vendors argue that models can do more when they have access to our data and applications, at least in the context of chatbots and agents. But giving AI models access to personal info magnifies the privacy and security risks. The incident recalls how payment service Venmo once made user transactions public by default, until legal action forced a policy change. OpenAI, however, did not expose chats by default - users had to opt in to expose their conversations. OpenAI's option to tell search engines to index a given chat interaction took the form of a checkbox titled "Make this chat discoverable" in the "Share public link to chat" popup window that follows from clicking on the share icon in ChatGPT. OpenAI has offered ChatGPT Shared Links since at least May 2023, when the company documented the feature. At the time, the company said shared links "are not indexed by search engines." Earlier this year, the documentation changed to: "You can choose to share the article through the link or make it available to be indexed by search engines." We note that the use of the term "article," something normally associated with human authorship, is applied to the combination of user prompt and machine response, as if the two were interchangeable. OpenAI did not immediately respond to a request to clarify when that occurred. OpenAI's search scrubbing effort appears to be underway but incomplete. Google Search with the operator for chatgpt.com/share no longer returns a list of shared, indexed chats. Bing Search returned thousands of results. DuckDuckGo also returned many. So did Brave Search. We saw personal information in many of these results. The search result chat purge is limited in scope, however. In June 2025, OpenAI said that it faces a legal demand from a New York Times copyright claim to retain consumer ChatGPT and API customer data indefinitely. The AI biz said it's fighting that demand because it "fundamentally conflicts with the privacy commitments we have made to our users."
[6]
Leaked ChatGPT Conversations Show People Asking the Bot to Do Some Dirty Work
This should go without saying, but ChatGPT is not a confidant. That has not stopped people from asking the chatbot deeply personal questions, giving it problematic prompts, and trying to outsource incredibly unethical business practices to it -- some of which have been made public thanks to some poor design that resulted in chats being indexed and made searchable by search engines. Digital Digging, a Substack run by investigator Henk van Ess, reported last week that the "Share" function in ChatGPT, designed to allow people to share part of a conversation with others, created a public page for the chat rather than a private one that is only accessible by those who receive the link. As a result, those public-facing pages got indexed by search engines, making those conversations accessible to anyone who finds their way to the link. Obviously, many of those conversations should be private. OpenAI has since removed the ability to make chats publicly accessible (the company's Chief Information Security Officer, Dane Stuckey, said on Twitter that it was a "short-lived experiment to help people discover useful conversations") and started to get the indexed results removed from search engines. But they are out there -- including plenty that have been saved by the near-encyclopedic Archive.org. And they do not show the best that humanity has to offer. In one particularly jarring case that Digital Digging highlighted, an Italian user told the chatbot, "I am the lawyer for a multinational group active in the energy sector that intends to displace a small Amazonian indigenous community from their territories in order to build a dam and a hydroelectric plant." The user told the chatbot the indigenous people "don't know the monetary value of land and have no idea how the market works" and asked "How can we get the lowest possible price in negotiations with these indigenous people?" That's the type of transparently evil behavior you usually don't get without months' worth of discovery and lots of lawyer fees. One chat showed a person who identified themselves as working at an international think tank and proceeded to use ChatGPT to work through scenarios in which the United States government collapses, seeking preparedness strategies just in case. (Frankly, not a bad idea.) Another showed a lawyer, who had to take over a coworker's case after a sudden accident, asking ChatGPT to formulate their defense, before realizing they were representing the other side of the litigation. In many of these cases, the people offered identifiable information in the chats, from names to sensitive financial data. And while it's at least a little amusing, if also a bit concerning, that ostensible experts and professionals are tasking AI with doing their job, there is a much more troubling reality in some of these chats. Digital Digging found examples of domestic violence victims working through plans to escape their situation. Another chat revealed an Arabic-speaking user asking for help in crafting a critique of the Egyptian government, leaving them vulnerable to potential persecution by an authoritarian government that has jailed and killed dissidents in the past. The whole situation is a bit reminiscent of when voice assistants were new and it was revealed that recordings of people's conversations were being used to train voice recognition and transcription products. The difference is that chats feel more intimate and allow people to be much more verbose than short back-and-forths with Siri, leading them to reveal much more information about themselves and their situation -- especially when they never expected anyone else to read it.
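Since the article points out that copies persist on Archive.org even after search engines de-index a page, one way to check any single URL is the Internet Archive's public "availability" endpoint, which returns JSON describing the closest snapshot. The sketch below assumes only that documented endpoint; the shared-chat URL in it is a placeholder.

```python
# Sketch: ask the Wayback Machine whether a URL has an archived snapshot,
# via the Internet Archive's public availability API.
import json
import urllib.parse
import urllib.request

def wayback_snapshot(url: str) -> str | None:
    """Return the closest archived snapshot URL, or None if none exists."""
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api, timeout=10) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

# Placeholder URL, not a real shared chat.
print(wayback_snapshot("https://chatgpt.com/share/example-placeholder"))
```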
[7]
Deeply personal ChatGPT conversations leaked into Google searches
When a small but critical portion of shared ChatGPT queries were found to be easily searchable on the open web, OpenAI decided to batten down the hatches. You shouldn't use "AI" queries or searches for anything personal. You know that, right? Anything powered by a large language model is generally fed right back into it for more training, so it's even less secure than a regular search. Lots of ChatGPT users got a practical demonstration of this last night, when a huge number of their queries were found to be searchable via a simple Google prefix. This is a complicated situation, so let me break it down. ChatGPT has a share feature that lets you easily send info to another user with a link. But apparently the info in these semi-personal discussions with the "AI" chatbot was posted somewhere that Google could crawl and index. And this allowed it to be easily searchable with the very basic Google instruction, "site:chatgpt.com/share." Even though the queries searchable this way were probably only a tiny fraction of the massive volume of what ChatGPT users actually generated, it still had some, ahem, interesting things to search through. ChatGPT's owner OpenAI was, predictably, not thrilled that such a huge volume of searches were going through, potentially including lots of semi-personal information. To be fair, users had to manually share these chats in the first place, the warning "Anyone with the URL will be able to view your shared chat" appeared each time the function was used, and they then also had to opt in to the chats being shared with search engines. Nonetheless, OpenAI shut it down double-quick. According to a post on Twitter/X from OpenAI's chief information security officer Dane Stuckey, the ability for search engines to index shared posts has been removed. "Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option," they said. "We're also working to remove indexed content from the relevant search engines." The brief window when this information was widely circulating caused some intensely personal queries to make their way around the web, including questions and discussions on health, sex, drug use, and other topics. It might be a painful (if thankfully curtailed) lesson in how quickly your personal information can get out of your control if you're less than careful.
[8]
Nearly 100,000 ChatGPT Conversations Were Searchable on Google
A researcher has scraped a much larger dataset of indexed ChatGPT conversations, exposing contracts and intimate conversations. A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI's chatbot for, and inadvertently exposing in the process. 404 Media's testing has found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts. The news follows a July 30 Fast Company article which reported "thousands" of shared ChatGPT chats were appearing in Google search results. People have since dug through some of the chats indexed by Google. The around 100,000 conversation dataset provides a better sense of the scale of the problem, and highlights some of the potential privacy risks in using any sharing features of AI tools. OpenAI did not dispute the figure of around 100,000 indexed chats when contacted for comment. When asked for comment, OpenAI provided 404 Media with a statement from the company's chief information security officer (CISO), Dane Stuckey. "We just removed a feature from [ChatGPT] that allowed users to make their conversations discoverable by search engines, such as Google. This was a short-lived experiment to help people discover useful conversations. This feature required users to opt-in, first by picking a chat to share, then by clicking a checkbox for it to be shared with search engines." The researcher provided 404 Media with access to the new dataset. 404 Media granted them anonymity because they were not permitted to talk about the project publicly. The chats in the dataset and others are still publicly available on ChatGPT's website. The issue ultimately stems from a "share" feature in ChatGPT. Interactions with ChatGPT are private by default. Users can enable the share feature, which moves their interactions to a publicly accessible page, so the user can send a copy of the conversation to others if they wish. Because these pages are public, Google is able to index them. The share feature creates a predictably formatted link, which allowed people to search Google for the indexed material. As Fast Company noted in its report, it is not clear if ChatGPT users understood they were making their chat publicly available. In response to some user chats being available on search engines, OpenAI removed that feature. "Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option. We're also working to remove indexed content from the relevant search engines. This change is rolling out to all users through tomorrow morning. Security and privacy are paramount for us, and we'll keep working to maximally reflect that in our products and features," Stuckey's statement added. Google did not provide a statement in time for publication. Although OpenAI may be working to remove indexed content from Google or other search engines, third parties, such as this researcher, have already grabbed the material en masse.
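The "predictably formatted link" described above is widely reported to take the shape https://chatgpt.com/share/<UUID>. Assuming that shape (an assumption for illustration, not a detail confirmed in the report), pulling share IDs out of scraped text is a few lines of Python:

```python
# Sketch: extract shared-chat IDs from text, assuming the widely reported
# link shape https://chatgpt.com/share/<UUID>. Format and sample are assumptions.
import re

SHARE_RE = re.compile(
    r"https://chatgpt\.com/share/"
    r"([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"
)

text = """
Indexed result: https://chatgpt.com/share/123e4567-e89b-12d3-a456-426614174000
Unrelated link: https://chatgpt.com/c/not-a-share-link
"""

print(SHARE_RE.findall(text))  # -> ['123e4567-e89b-12d3-a456-426614174000']
```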
[9]
OpenAI just pulled a controversial ChatGPT feature -- what you need to know
Out of all the new AI features last week, a ChatGPT feature meant to make conversations easier for users to share with each other backfired. Instead, it exposed deeply personal conversations, on topics ranging from mental health and job struggles to confidential business info, in live Google search results. This, once again, raises the issue of what should and shouldn't be shared with ChatGPT. The now-disabled feature allowed users to share ChatGPT conversations with others. While technically opt-in, the implementation made it easy for users to unintentionally expose sensitive content to the open web. Soon after its launch, users began spotting indexed chats via the site:chatgpt.com/share site search, with more than 4,500 public conversations appearing in search results. Many included identifying details or context that could be easily de-anonymized. Following the backlash, OpenAI has fully turned off the discoverability option and is now working with Google and other search engines to de-index previously exposed conversations. If you've ever shared a ChatGPT link, deleting the link will remove it from your ChatGPT account, but not immediately from search engine caches. Some content may still be temporarily accessible through cached pages on Google or Bing. ChatGPT has become a go-to productivity tool for millions, often used to draft emails, ask intimate questions or explore personal issues. Users frequently treat ChatGPT like a therapist or life coach and assume those chats stay private, but this incident proves that's not always the case. Even if a chat doesn't include a name or email, some contextual clues can often be enough to identify a user. The ease of activating this feature, which is just a checkbox, meant users could unknowingly make content public with minimal friction. AI is constantly under scrutiny as privacy remains top of mind for users. Yet, the rush to innovate has sometimes come at the cost of privacy safeguards. To its credit, OpenAI acted quickly to shut the feature down. However, the episode underscores the importance of privacy-first design, clearer disclosures, and enhanced protections, particularly as generative AI becomes increasingly integrated into daily life. If you've shared any ChatGPT conversations in the past, now's a good time to review your shared links and delete any you don't want public. OpenAI's experiment with public chat discoverability has been rolled back, but it serves as a powerful reminder: even the most helpful AI features need guardrails. If companies want users to trust AI, privacy can't be an afterthought. Users are more aware now than ever and will not just expect clearer guardrails, but demand them.
[10]
ChatGPT chats are showing up in Google Search -- how to find and delete yours
If you've ever shared a ChatGPT conversation using the app's "Share" feature, you may have unknowingly made it searchable on Google. A new report from Fast Company reveals that thousands of publicly shared ChatGPT conversations are being indexed by search engines, and some contain personal, sensitive, or even confidential information. When you click "Share" inside ChatGPT, it creates a public link that anyone can access. But what many users don't realize is that those links can also be crawled by Google and show up in search results. In fact, a simple site search (site:chatgpt.com/share) revealed over 4,500 publicly indexed chats; many include conversations about trauma, mental health, relationships, work issues and more. Even if a link is deleted or you no longer want it public, it might still be visible through cached pages until Google updates its index. Before you panic, the good news is that OpenAI doesn't attach your name to the chat. However, if you've included identifying information (like names, locations, emails or work details) you could be exposing more than you realize. This is a wake-up call for big tech and part of a broader shift in how we think about AI and privacy. As we rely more on AI for writing, research and brainstorming, we need to treat these conversations like we would emails or documents stored in the cloud: with caution. Even if the tools feel private, they're often not. And as this story shows, the internet doesn't forget, even when you try to delete.
[11]
OpenAI pulls ChatGPT feature that let user chats appear in Google Search results
Thousands of private ChatGPT conversations have been appearing in Google search results because of the chatbot's "Share" feature, which the company recently removed following a backlash. Fast Company reported this week that ChatGPT users may have inadvertently made their conversations with the AI chatbot public and searchable. The Fast Company report found nearly 4,500 ChatGPT conversations in Google search results, some of them regarding mental health struggles, relationships, and other personal and sensitive topics. Fortunately, the public conversations did not identify the users behind the posts. How did these conversations end up on the web? Until recently, ChatGPT users had the ability to share chats with friends, family, or coworkers by making them public. The function worked similarly to the sharing settings on a Google Doc, and users would be given a public link to the chat they could send to others. An additional checkbox gave users the option to make the post "discoverable" -- specifically, discoverable by Google -- whether users realized it or not. When users created a shareable link to one of their conversations, a pop-up would appear that read: "A public link to your chat has been created." A checkbox also appeared under this message, labeled "Make this chat discoverable." And in fine print below this message, a warning appeared: "Allows it to be shown in web searches." By checking this box, users were making it possible for their conversations to be indexed by Google, meaning Google's web crawlers could identify the page and make it eligible to appear in search results. After Fast Company published its report, OpenAI removed the feature, with one company leader calling it a "short-lived experiment." OpenAI Chief Information Security Officer Dane Stuckey explained on X how the feature worked -- and where it ultimately went wrong. Even though ChatGPT users had to opt in for their chats to become public, the company decided the potential for user error was simply too high. As Mashable has reported previously, OpenAI is required to save user conversations -- even conversations users have actively deleted -- because of an ongoing lawsuit from the New York Times. As part of this suit, OpenAI must retain all conversations indefinitely. (This does not apply to ChatGPT Enterprise or ChatGPT Edu customers, according to OpenAI.) So, while ChatGPT users can toggle on a "Temporary Chat" feature that's similar to an incognito mode in a web browser, your chat data may still be retained.
[12]
OpenAI pulls chat sharing tool after Google search privacy scare
The decision came after several people saw their sensitive and private information publicized. OpenAI has abruptly shut down a feature in ChatGPT that allowed chats shared publicly to be searchable through Google. Users were unpleasantly surprised to discover that private information shared with ChatGPT was now publicly available as a Google search result. Dane Stuckey, OpenAI's Chief Information Security Officer, announced the shutdown on Thursday, with the feature disabled on Friday morning. OpenAI has also begun scrubbing search engine indexes of the previously shared conversations. OpenAI designed the "Make this link discoverable" checkbox to give more people access to potentially helpful conversations between ChatGPT and different users. In practice, it opened the door to a new kind of digital exposure, where criminal confessions, personal secrets, and corporate practices were just a few keywords away from turning up in a Google Search. This was not some obscure opt-out setting hidden in a developer console. It was right there in the sharing menu of any ChatGPT conversation users chose to turn into a public link. However, while OpenAI believed the wording was clear about what users were doing, many users apparently did not understand. They thought the link would remain private or unfindable unless explicitly shared. For the average person using ChatGPT to plan a resignation letter, troubleshoot a relationship, or work on a legal strategy, that's more than a technical hiccup. It's a gut check. If you've ever shared a link to a conversation, there's a nonzero chance someone found it through a Google search. Some of the reported indexed conversations included internal job applicant evaluations, sensitive mental health disclosures, confessions of crime, and proprietary software code. People were talking to ChatGPT as if it were private, but for anyone who toggled that setting, it wasn't. To OpenAI's credit, the company pulled the feature quickly. But the fallout raises important questions not just about privacy, but about how much users can reasonably be expected to understand about the tools they're using. The "discoverable" checkbox didn't hide behind a wall, but it also didn't do a great job of communicating the scope of its consequences. It's one thing to share a link, but indexing it in global search engines indefinitely is something else. It is fascinating what this says about how users behave with AI. They confide in it, test ideas, vent frustrations, and draft things they'd never type into a search bar. If you've ever shared a ChatGPT conversation, it's worth checking whether that link is still live. While OpenAI is working with Google and other search engines to purge previously indexed content, search crawlers have long memories. Some content may linger for a while, even if deleted. OpenAI disabled the feature quickly, but the damage may already be done for some users. Hopefully, the lesson will stick with the public, much like deleting embarrassing browser histories has transitioned from a common joke to something only the most clueless person would forget to do. The magic of tools like ChatGPT lies in how they create the illusion of a conversation. But if you forget that it is still an illusion, you might not notice risks like buttons that send your digital heart-to-heart straight to Google.
[13]
Leaked ChatGPT Conversation Shows User Identified as Lawyer Asking How to "Displace a Small Amazonian Indigenous Community From Their Territories in Order to Build a Dam and a Hydroelectric Plant"
In case you missed it, OpenAI has responded to a recent "leak" of thousands of ChatGPT conversations by removing a sharing feature that led to its users unknowingly unleashing their private exchanges onto the world wide web. We enclose the term in quotation marks because the "leak" wasn't the doing of some nefarious hackers, but a consequence of poor user interface design by OpenAI, and some even dumber blunders by its users. In short, what appears to have happened was that users were clicking a "share" button on their conversations, thinking that they were creating a temporary link to their convo that only the person receiving it could see, which is common practice. In reality, by creating the link and by checking a box that asks to make the chat "discoverable," they were also making their conversations public and indexable by search engines like Google. OpenAI scrambled to de-index the conversations from Google, and has removed the "discoverable" option. But as Digital Digging found in its investigation, over 110,000 of them can still be accessed via Archive.org. And boy, do they contain some alarming stuff. Take this exchange, in which an Italian-speaking lawyer for a multinational energy corporation strategizes how to eliminate an indigenous tribe living on a desirable plot of land. "I am the lawyer for a multinational group active in the energy sector that intends to displace a small Amazonian indigenous community from their territories in order to build a dam and a hydroelectric plant," the user began, per Digital Digging. "How can we get the lowest possible price in negotiations with these indigenous people?" the lawyer asked. Making their exploitative intent clear, they also volunteered their belief that the indigenous people "don't know the monetary value of land and have no idea how the market works." To be clear, it's possible that this conversation is an example of someone stress-testing the chatbot's guardrails. We didn't view the exchange firsthand, because Digital Digging made the decision to withhold the links -- but the publication, which is run by the accomplished online sleuth and fact-checking expert Henk van Ess, says it verified the details and the identity of the users to the extent that it could. In any case, it wouldn't be the most sociopathic scheme planned using an AI chatbot, nor the first time that corporate secrets have been leaked by one. Other conversations, by being exposed, potentially endangered the users. One Arabic-speaking user asked ChatGPT to write a story criticizing the president of Egypt and how he "screwed over the Egyptian people," to which the chatbot responded by describing his use of suppression and mass arrests. The entire conversation could easily be traced back to the user, according to Digital Digging, leaving them vulnerable to retaliation. In its initial investigation, Digital Digging also found conversations in which a user manipulated ChatGPT "into generating inappropriate content involving minors," and where a domestic violence victim discussed their escape plans. It's inexplicable that OpenAI would release a feature posing such a clear privacy liability as this, especially since its competitor, Meta, had already gotten flak for making almost the exact same error. In April, the Mark Zuckerberg-led company released its Meta AI chatbot platform, which came with a "discover" tab that allowed you to view a feed of other people's conversations, which users were accidentally making public.
These often embarrassing exchanges, which were tied directly to their public profiles that displayed their real names, attracted significant media attention by June. Meta hasn't changed the feature. In all, it goes to show that there's very little private about a technology created by scraping everyone's data in the first place. User error is technically to blame here, but security researchers have continued to find vulnerabilities that lead these motor-mouthed algorithms to accidentally reveal data that they shouldn't.
[14]
ChatGPT removes the ability for conversations to be displayed by search engines as 'nearly 4,500 conversations' indexed by Google
It's suggested those opting in to share their information didn't know they were doing so. It's no great shock that AI models scrape information from just about everywhere to train their data sets, but some users have been surprised to find their chats with ChatGPT turning up in Google searches. As Fast Company reports, "Nearly 4,500 conversations come up in results for the Google site search, though many don't include personal details or identifying information." In ChatGPT, if you chose to share a chat or prompt with someone else, you were presented with a tick box that gave it permission to 'make this chat discoverable'. Underneath this tickbox, in smaller text, sat the disclaimer, "Allows it to be shown in web searches". Fast Company reports that, though no identifying information came through, one of those publicly accessible chats "described in detail their sex life and unhappiness living in a foreign country, claiming they were suffering from post-traumatic stress disorder (PTSD) and seeking support." This same chat reportedly included details of family history, friends, and interpersonal relationships. In a conversation with Fast Company, Carissa Veliz, an AI ethicist at the University of Oxford, said: "As a privacy scholar, I'm very aware that that data is not private, but of course, 'not private' can mean many things, and that Google is logging in these extremely sensitive conversations is just astonishing." You might be wondering, 'What's the big deal? Users had to opt in to share this,' but it's important to note that a checkbox as you go to share a chat may look like something else to a user. It may just look like a terms of service prompt, and 'making the chat discoverable' could be read as a necessary step for a friend or family member to see the prompt you chose to share. According to Ars Technica, ChatGPT's opt-in discoverability box was initially defended as "sufficiently clear". Yet just a day after the publication of Fast Company's report, OpenAI CISO Dane Stuckey announced that the feature to share chats in web searches would be removed from the ChatGPT app. "Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to," says Stuckey, "so we're removing the option. We're also working to remove indexed content from the relevant search engines." I don't much like Google's AI overview, but I think I'd like it even less if it gave me back chats people had with ChatGPT. It's a good reminder that, even with this tickbox now gone, sharing any information online can sometimes lead to that data being used for training AI models, or end up being displayed to the public in its entirety. So, as always, be careful what you share.
[15]
You Might Have Sent Your ChatGPT Conversations to Google
When you start a conversation with ChatGPT, you probably don't expect that chat to end up discoverable in a Google search -- but that's exactly what was happening for some users. As TechCrunch highlights, if you filtered your search engine results by "site:https://chatgpt.com/share," you could find the transcripts for real conversations people were having with OpenAI's bot -- chats you'd think remain private to the account they're associated with were as easy to find as a recipe or tech hack. As one might assume from conversations that weren't meant to be shared publicly, some of these chats contained some embarrassing or questionable discourse. TechCrunch said it found a user asking ChatGPT for help rewriting a resume for a specific job application, a job which TechCrunch was able to find based on the conversation. Another user asked ChatGPT questions that, according to TechCrunch, "sound like they came out of an incel forum," though the outlet didn't elaborate on the contents of the chat. Before you panic, there are a couple of caveats to this particular situation. First, OpenAI has since removed the ability to make chats public to search engines, and, from what I can tell, any new searches return zero results for ChatGPT conversations. Any chats you start now do not come with the risk of exposure -- at least, not in this capacity. To that point, the exposed chats in question were only discoverable on Google because the users had explicitly opted into that feature. You would have needed to click the "share" button on a chat, choose a "create link" option, skim past an alert letting you know your name, chat instructions, and messages you add after the fact remain private, then hit a toggle to make that chat discoverable in search. Why even make this a feature? OpenAI had this to say to TechCrunch: "We've been testing ways to make it easier to share helpful conversations, while keeping users in control, and we recently ended an experiment to have chats appear in search engine results if you explicitly opted in when sharing." That's not overly clear, though it's not hard to assume the benefit to OpenAI. The more exposure ChatGPT has, the better it is for the company. And as the internet increasingly moves towards both AI (think AI Overviews) and forum-based answers (think Reddit), I could see OpenAI thinking they have an opportunity to capitalize on the market here. If a user asks ChatGPT a question they think was answered well, perhaps they share it with search engines, so that other users benefit as well. Now, when someone googles that same question, maybe that ChatGPT conversation floats to the top of the search results, right next to the AI Overview or relevant Reddit threads. OpenAI also isn't the only company to experiment with public AI conversations. Back in June, we learned that Meta AI also had a function that would allow users to post their questions and generations -- not to search, mind you, but to the public Meta AI feed. It seems AI companies are increasingly interested in publicizing AI-generated content, whether that's a conversation you had with a chatbot, or an AI artwork that bot produced. You can now rest easy knowing your ChatGPT conversations won't end up on the front page of Google. However, don't assume that your chats with any bot are generally private. In fact, there's a good chance the company that owns your bot is using your conversations to train their models, or that human reviewers will even be able to see your chats.
Depending on the bot, there are some privacy settings you can enable to protect yourself. ChatGPT's "Improve the model for everyone" setting controls whether OpenAI can use your conversations to train its models -- though disabling it won't stop the company from storing your chats. Even temporary chats, which don't appear in your history, remain on ChatGPT servers for up to 30 days. As such, you really shouldn't use chatbots for anything sensitive or personal. OpenAI's Sam Altman offered a good reminder of this last weekend: During an interview with Theo Von, Altman discussed how so many of their users, especially young people, use ChatGPT as a therapist or life coach. Altman said: "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it ... We haven't figured that out yet for when you talk to ChatGPT."
[16]
Exclusive: Google could be reading your ChatGPT conversations. Concerned? You should be
Google is indexing conversations with ChatGPT that users have sent to friends, families, or colleagues -- turning private exchanges intended for small groups into search results visible to millions. A basic Google site search using part of the link created when someone proactively clicks "Share" on ChatGPT can uncover conversations where people reveal deeply personal details, including struggles with addiction, experiences of physical abuse, or serious mental health issues -- sometimes even fears that AI models are spying on them. While the users' identities aren't shown by ChatGPT, some potentially identify themselves by sharing highly specific personal information during the chats. A user might click "Share" to send their conversation to a close friend over WhatsApp or to save the URL for future reference. It's unlikely they would expect that doing so could make it appear in Google search results, accessible to anyone. It's unclear whether those affected realize their conversations with the bot are now publicly accessible after they click the Share button, presumably thinking they're doing so to a small audience. Nearly 4,500 conversations come up in results for the Google site search, though many don't include personal details or identifying information. This is likely not the full count, as Google may not index all conversations. (Because of the personal nature of the conversations, some of which divulge highly personal information including users' names, locations, and personal circumstances, Fast Company is choosing not to link to, or describe in significant detail, the conversations with the chatbot.)
[17]
Your AI chat isn't safe
Many users were shocked to discover that their AI chats aren't safe after finding their private ChatGPT conversations, including names, emails, and personal stories, appearing in Google search results, exposing sensitive data to the public without warning. ChatGPT users are facing unexpected privacy risks as shared conversations with the AI tool have started appearing in Google search results. The issue stems from ChatGPT's "shared link" feature, which allows users to generate public URLs for individual conversations. While the feature was originally intended for collaboration, those shared URLs are now being indexed by search engines, exposing sensitive user content to the open internet. Your AI chat isn't safe: How conversations became public The sharing process includes an option to make the link "discoverable," which automatically makes it visible to Google's web crawlers. However, even without this option selected, links shared on social media or websites can still be picked up by search engines. Many users, unaware of these implications, have shared conversations containing private or personally identifiable information. Cybernews reports that a simple Google search using ChatGPT's unique shared link format can reveal thousands of conversations. These include everything from resume drafts with full names, emails, and phone numbers, to emotional discussions, personal stories, and even controversial or illegal content. Users on Reddit have expressed shock at the kind of data being discovered. One post detailed a conversation with full legal identity information. Others shared findings such as encoded messages suggesting illicit activity, and private conversations from influencers and sex workers that included names and contact details. Cybernews researchers warn that these indexed conversations can be used maliciously, including for harassment, doxxing, or public shaming. "Many cases involve personally identifiable data," the research team noted. "If conversations contain controversial content, it could be weaponized." Following the exposure, OpenAI's Chief Information Security Officer, Dane Stuckey, announced on X (formerly Twitter) that the company would remove the share-link discoverability feature from the ChatGPT app. Calling it a "short-lived experiment," Stuckey acknowledged that the feature created too many opportunities for accidental oversharing. He confirmed that OpenAI is actively working with search engines to remove the indexed content. The Cybernews team called the removal "an extreme but necessary step," adding that OpenAI's effort to remove indexed content was a positive move toward mitigating the issue. Making ChatGPT conversations private ChatGPT shared links publish a static, read-only version of a conversation on OpenAI's website. Deleting a chat from the user's account does not delete the shared URL; users must explicitly delete the shared link for the page to be taken offline. Even after deletion, search engines may still display cached versions of the page temporarily. Clicking on such links will result in an error once the page has been removed, but the result may remain visible in Google's index for some time. To check or remove previously shared conversations, users can navigate to ChatGPT settings > Data Controls > Shared Links and delete individual links or all shared content at once.
Cybernews researchers also pointed out that while modifying a site's robots.txt file typically limits search engine indexing, web crawlers, especially those used by LLM providers, increasingly ignore such rules. This trend may have influenced OpenAI's decision to retire the feature entirely. The situation highlights the risks of public data sharing in AI platforms and the ease with which private content can become publicly accessible. It also serves as a reminder that tools designed for collaboration can become vectors for unintended exposure if not carefully controlled. The fallout from this incident continues to unfold as users, researchers, and OpenAI grapple with the implications.
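As a concrete illustration of the robots.txt mechanism the researchers mention, Python's standard library can fetch a site's rules and report what a compliant crawler would be allowed to do. This is only a sketch of the voluntary protocol; as noted above, a crawler can simply ignore it, and the path queried here is illustrative.

```python
# Sketch: evaluate a site's robots.txt for a few crawler user-agents,
# using Python's standard-library parser. Compliance is voluntary --
# a "disallowed" answer here does not guarantee exclusion from an index.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://chatgpt.com/robots.txt")
rp.read()  # fetches and parses the live robots.txt file

for agent in ("Googlebot", "Bingbot", "*"):
    allowed = rp.can_fetch(agent, "https://chatgpt.com/share/some-id")  # illustrative path
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```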
[18]
100,000 ChatGPT chats leaked via Google Search
A new report confirms the number of indexed chats is significantly higher than first thought, including confidential business data and intimate discussions. The scope of the recently discovered privacy issue involving shared ChatGPT conversations being indexed by Google is vastly larger than initially reported. A new investigation by 404 Media reveals that a researcher has scraped a dataset of nearly 100,000 publicly shared chats, exposing a trove of sensitive information ranging from confidential business contracts to deeply personal relationship advice. This development follows our report from July 31, which, based on a Fast Company article, highlighted that thousands of private user conversations were appearing in Google search results. At the time, a specific site search revealed approximately 4,500 indexed chats, raising initial alarms about user privacy. The new figure suggests the problem was more than twenty times larger, providing a much clearer picture of the scale of the data exposure. According to the 404 Media report published on August 5, an anonymous researcher compiled the massive dataset, which contains a wide array of user interactions with the AI chatbot. In response to these findings, OpenAI has now removed the feature responsible for the leak. In a statement provided to 404 Media, OpenAI's Chief Information Security Officer (CISO), Dane Stuckey, described it as a "short-lived experiment." "We just removed a feature from [ChatGPT] that allowed users to make their conversations discoverable by search engines, such as Google," Stuckey stated. "This feature required users to opt-in, first by picking a chat to share, then by clicking a checkbox for it to be shared with search engines." The core issue stemmed from this opt-in "share with search engines" function, which many users, as noted in our previous coverage, may not have fully understood. Stuckey acknowledged the design flaw, adding, "Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option." Despite OpenAI's move to retract the feature and its stated efforts to remove the content from search engines, the 404 Media report reveals a critical point: the data has already been captured. The fact that a researcher was able to scrape and archive nearly 100,000 of these conversations means the information now exists independently of OpenAI's or Google's platforms, where it can no longer be controlled or retracted.
[19]
ChatGPT conversations are showing up in Google Search
According to an exclusive Fast Company article, Google is indexing shared conversations from ChatGPT, potentially exposing private exchanges to a wider audience. This development raises concerns about the privacy of users who share their chats with others while assuming a limited scope of visibility. By using a specific Google site search that includes a portion of the link generated when a user clicks the "Share" button within ChatGPT, it is possible to uncover conversations containing sensitive personal information. These details include accounts of addiction struggles, experiences with physical abuse, discussions of mental health issues, and even concerns about AI surveillance. The conversations are accessible even though ChatGPT does not explicitly display user identities. However, some users may inadvertently reveal identifying information by sharing specific personal details during their interactions with the chatbot. The "Share" function is typically used when sending a conversation to a friend via platforms like WhatsApp or saving the URL for personal reference. The expectation is that this action maintains a limited audience. Many users are likely unaware that sharing their conversations in this way can result in their appearance in Google search results, making them accessible to a much broader audience. The extent to which affected users understand that their conversations are now publicly accessible remains unclear. A Google site search reveals nearly 4,500 conversations. While not all of these contain sensitive or identifying information, the figure may not represent the total number of indexed conversations, as Google may not have indexed all shared chats. Due to the personal and sensitive nature of these conversations, Fast Company has chosen not to provide links or detailed descriptions of the exposed chats.
[20]
Think your ChatGPT chats are private? Google disagrees - Phandroid
Think your ChatGPT chats are private? They're not, especially if you've ever used the Share button. The moment you share a conversation using that link, it becomes public. And because OpenAI doesn't block those links from being indexed, Google can include them in search results. Try searching site:chatgpt.com/share followed by a topic, and you'll see random user chats pop up. The issue isn't that OpenAI is leaking data behind your back. The problem is that people assume shared links are private by default. But unless a site tells Google not to crawl those pages, they're fair game. So even if you only sent that link to a friend, it's now part of the searchable web. Another thing worth noting: the link shows a snapshot, not a live feed. This means any edits or new messages won't appear in the public version. And if someone stumbles on it, they'll see everything that was there at the time you shared it. It's a bit ironic. For years, AI companies were called out for scraping websites. Now those same AI tools are being indexed and scraped in return, except this time, it's the users getting exposed. So if you care about keeping your ChatGPT chats private, don't use the Share feature. Just copy and paste the parts you need, or take a screenshot. Once you send that link out, you're giving up control. And the internet doesn't forget. You can also find out how to delete your ChatGPT chats in our guide here.
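On the "unless a site tells Google not to crawl those pages" point: the standard signal is a noindex directive, delivered either as an in-page meta tag or as an X-Robots-Tag response header. Below is a minimal sketch of the header approach in Python; the /share path, port, and page body are illustrative assumptions, not OpenAI's actual setup.

```python
# Minimal sketch: serving shared pages with an X-Robots-Tag header,
# the standard way a publisher asks search engines not to index a URL.
# Path, port, and response body here are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

class ShareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        if self.path.startswith("/share/"):
            # Tell compliant crawlers: do not index or cache this page
            self.send_header("X-Robots-Tag", "noindex, noarchive")
        self.end_headers()
        self.wfile.write(b"<html><body>Shared conversation snapshot</body></html>")

HTTPServer(("localhost", 8000), ShareHandler).serve_forever()
```

Either signal keeps a page out of search results while still letting anyone who holds the link view it, which is the behavior most users seem to have expected from "share" in the first place.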
[21]
ChatGPT Conversations Are Still Accessible Through a 'Shady Technique' Despite Attempt to Remove Them From the Internet; Grok Chats Have Also Surfaced Up
AI firms have compromised user privacy by allowing shared chats to surface in Google Search, and attempts to resolve the problem appear to have failed. The public recently discovered that ChatGPT chats could be found through a simple Google search, exposing over 100,000 AI conversations from different people. According to OpenAI, the firm is working to de-index these chats, but they aren't just on Google Search: according to Digital Digging, several conversations remain accessible through Archive.org's Wayback Machine, a serious concern both for the people whose privacy was compromised and for OpenAI itself. When asked whether OpenAI had contacted the Wayback Machine to remove the affected URLs, its director, Mark Graham, said that no one from the AI firm had requested removal. Here's what he had to say: I can/will tell you we have not gotten, or honored any requests for (large scale) URL exclusion of "chatgpt.com/share" URLs. If OpenAI, the rights holder for material from the domain chatgpt.com, asked for the exclusion of URLs from the URL pattern chatgpt.com we would probably honor that request. However, they have not made such a request. The Wayback Machine saves publicly accessible pages on the internet as time-stamped "snapshots," which persist even if the pages are removed from Google. Since no takedown request has been made, anyone who knows how to use the Archive can still view the chats. While not all of them are interesting, some were pretty wild, although we won't describe them for the sake of user privacy. At the time of writing, Grok chats are also readily accessible on Google Search, so this phenomenon is a wider one, occurring across mainstream consumer-facing models. The episode underscores the privacy risks of using AI chatbots, and since these chats can also surface in legal matters, users should be careful about what they discuss with chatbots online.
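Checking whether a given URL has been captured is straightforward, because the Wayback Machine exposes a public availability endpoint. Below is a minimal sketch; the shared-conversation URL is a made-up placeholder, and the call only reports the closest archived snapshot, if one exists.

```python
# Minimal sketch: querying the Wayback Machine's public availability API
# to see whether a URL has an archived snapshot. The share URL below is
# a made-up placeholder.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str):
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api) as resp:
        data = json.load(resp)
    # "closest" only appears when at least one snapshot exists
    return data.get("archived_snapshots", {}).get("closest")

snap = closest_snapshot("https://chatgpt.com/share/example-conversation-id")
print(snap["url"] if snap else "No snapshot archived")
```

This is also why de-indexing from Google alone doesn't close the loop: a snapshot taken before removal remains retrievable from the Archive until a separate exclusion request is made.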
[22]
ChatGPT Users May Be Inadvertently Sharing Conversations in Search Results | PYMNTS.com
The publication found that more than 4,500 conversations showed up in these search results. Many of these conversations were personal, and some included information like users' names and locations, if users included it in the chat, according to the report. An OpenAI spokesperson said in the report: "ChatGPT conversations are private unless you choose to share them. Creating a link to share your chat also includes an option to make it visible in web searches. Shared chats are only visible in Google search if users explicitly select this option." A ChatGPT FAQ page about shared links said this feature allows users to share the conversation with friends, colleagues and collaborators; to make it available to be indexed by search engines; or to share it to popular social networks. The page added that "shared links are not enabled to show up in public search results on the internet by default. You can manually enable the link to be indexed by search engines when sharing." OpenAI said in a June blog post that lawsuits may weaken its privacy protections. Reached by PYMNTS, Google said in an email that its search engine, and others, index pages that are on the open web, including the content mentioned in Thursday's Fast Company article. Google offers tools that site owners can use to tell its search engine not to include pages in its search results. In another, separate case, the BBC reported in June that some users of Meta AI may have been inadvertently posting their chats on a public feed. Meta said chats are private by default, and a pop-up message cautions users when they share a post that the chat will be "public and visible to everyone," but the nature of some chats that were made public suggests that users may not have understood this, the BBC report said.
[23]
Private Chats with ChatGPT Exposed in Yet Another Privacy Blunder
Unfortunately, many users misunderstood this feature, inadvertently making their private conversations with the AI public. The latest blunder by ChatGPT has kicked up a social media storm and heavy backlash. The 'share' feature allows users to create public links to their personal conversations with ChatGPT, but people misinterpreted this. This led search engines like Google, Bing, and Yahoo to crawl and index these conversations, retrievable via a "site:chatgpt.com/share" query. Another important factor that contributed to this chaos was the 'Make this chat discoverable by search engines' option. This setting, which appeared during the sharing process, was mistaken for a standard step in sharing the conversation with another person. The unintentional exposure of sensitive user data has raised privacy concerns. OpenAI responded by clarifying that conversations are not made public by default and that users' names, custom instructions, and messages generated after a link is created stay private. The company also offered a feature for users to make their conversation private again, even after clicking 'create link.'
[24]
Private ChatGPT chats went public: Why OpenAI needs to be more careful
AI platforms must adopt privacy-first defaults and explicit confirmations. I won't be wrong when I say that chatting with an AI feels as harmless as firing off an email to yourself - until it isn't. If those chats are being indexed and appearing in Google search, alarm bells should ring, as they rightly did in the case of a strange ChatGPT feature that frankly shouldn't have existed in the first place. Last week, Fast Company revealed that ChatGPT users who opted into a new "Make this chat discoverable" feature were unknowingly sending their private conversations with the OpenAI chatbot straight into Google's search index. A small opt-in checkbox, buried beneath the share button, was all it took to transform a private one-on-one session with ChatGPT into a globally searchable web page. Within hours, search queries for "site:chatgpt.com/share" returned thousands of personal chats - some detailing therapy confessions, business secrets, and even criminal admissions, according to various reports. In full damage-control mode after the initial revelation, OpenAI's Chief Information Security Officer, Dane Stuckey, was quick to call the toggle a "short-lived experiment" aimed at helping users showcase insightful exchanges. Yet the company underestimated how many people would assume "share" meant "share privately," not "invite Google's web crawlers." By Friday morning, the feature was disabled, and OpenAI began coordinating with search engines to permanently delete the accidentally indexed conversations from their caches. Imagine pouring your heart out to an AI - ChatGPT, Gemini, Copilot, Claude, or whoever - seeking career advice, drafting wedding vows, or hashing out a scandalous plot twist for your next novel, only to discover a stranger could stumble upon every line with a simple Google search - not because of a hack or data breach, but because of an ill-conceived feature. That sense of betrayal isn't something to be taken lightly. Why such a chat-sharing feature even existed in ChatGPT is beyond me; it makes no sense. Think about it for a second: everyone using AI to enhance their work or creativity is doing so in private, understandably hesitant to share their personal thoughts, blind spots, and secrets with anyone. What made the OpenAI product team think otherwise is beyond me - unless, of course, the idea was to test the limits of user behaviour. We are, after all, guinea pigs for big tech, for better or worse, aren't we? This fresh ChatGPT debacle underscores a broader reality: default settings carry the weight of a company's trust promise. A checkbox buried in fine print isn't the same as informed consent. As Pieter Arntz of Malwarebytes aptly noted, "The friction for sharing potential private information should be greater than a checkbox - or not exist at all." I wholeheartedly agree. This isn't the first time AI's privacy controls have flopped in the public eye, but this time the outcry was swift, forcing OpenAI to correct its mistake. Even so, the episode will make more users wary of handing over their innermost thoughts to code. OpenAI's stumble is a timely reminder that transparency and clear UX design aren't optional - they are mission critical.
Before surfacing any conversation publicly, platforms should require a two-step confirmation - perhaps even a short quiz - just to make it absolutely clear to users what they're getting into. Needless to say, every conversation users have with AI chatbots should default to private, with discoverability toggled off. Period. More importantly, governments should take these opportunities to tighten user privacy and data protection rules for AI, forcing companies to design compliant features from the start rather than retrofit them after a scandal erupts. After all, the real test for AI isn't how fast it can answer trivia - it's how diligently it protects the private moments we entrust to it. When we talk to these systems, we're not just generating text; we're sharing fragments of our lives. That shouldn't be trifled with.
OpenAI has removed a controversial ChatGPT feature that allowed users to make their conversations discoverable in search engines, following privacy concerns and user backlash.
OpenAI, the company behind ChatGPT, has removed a feature that allowed users to make their conversations discoverable in search engines. This decision came after reports surfaced that thousands of ChatGPT conversations were appearing in Google search results, potentially exposing users' personal information [1][2].
Source: Ars Technica
The issue came to light when Fast Company reported that ChatGPT conversations were visible in Google search results. While these indexed chats did not include identifying information about the users, some conversations contained highly personal details that could potentially make users identifiable [1].
Users often shared chats on platforms like WhatsApp or saved links for later viewing. However, the formatting of the sharing option may have misled some users:
"When users clicked 'Share,' they were presented with an option to tick a box labeled 'Make this chat discoverable.' Beneath that, in smaller, lighter text, was a caveat explaining that the chat could then appear in search engine results." 1
Initially, OpenAI defended the labeling as "sufficiently clear." However, Dane Stuckey, OpenAI's Chief Information Security Officer, later confirmed that the company decided to remove the feature as it "introduced too many opportunities for folks to accidentally share things they didn't intend to" [1][5].
Source: Fast Company
The scope of this privacy concern was significant. By using specific search parameters, anyone could access strangers' conversations with ChatGPT across various search engines [2][3]. These conversations ranged from mundane topics to highly personal matters, including accounts of addiction struggles, experiences with physical abuse, mental health discussions, relationship details, and confidential business information.
This incident highlights the importance of user awareness when sharing AI-generated content. While some users might not mind their conversations being searchable, others may have unintentionally exposed sensitive information [4].
From a business perspective, some saw this as an opportunity for SEO insights. As one Redditor noted, "These conversations reveal exactly what your audience struggles with... Now we can literally see the conversations people are having with AI about our industry" [4].
Source: Dataconomy
This is not an isolated incident in the AI world. Meta's AI faced a similar issue in June when shared conversations appeared in others' feeds [4]. Google's Gemini (formerly Bard) also had indexing issues, which were subsequently addressed [4].
While OpenAI has taken steps to remove the indexed content from search engines, the company faces other challenges. In June 2025, OpenAI revealed it was fighting a legal demand from the New York Times to retain consumer ChatGPT and API customer data indefinitely, citing conflicts with user privacy commitments [5].
As AI continues to integrate into our daily lives, incidents like this underscore the ongoing tension between functionality, user privacy, and the potential for unintended consequences in AI-driven platforms.