20 Sources
[1]
Sam Altman warns there's no legal confidentiality when using ChatGPT as a therapist | TechCrunch
ChatGPT users may want to think twice before turning to their AI app for therapy or other kinds of emotional support. According to OpenAI CEO Sam Altman, the AI industry hasn't yet figured out how to protect user privacy when it comes to these more sensitive conversations, because there's no doctor-patient confidentiality when your doc is an AI. The exec made these comments on a recent episode of Theo Von's podcast, This Past Weekend w/ Theo Von. In response to a question about how AI works with today's legal system, Altman said one of the problems of not yet having a legal or policy framework for AI is that there's no legal confidentiality for users' conversations. "People talk about the most personal sh** in their lives to ChatGPT," Altman said. "People use it -- young people, especially, use it -- as a therapist, a life coach; having these relationship problems and [asking] 'what should I do?' And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT." This could create a privacy concern for users in the case of a lawsuit, Altman added, because OpenAI would be legally required to produce those conversations today. "I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever -- and no one had to think about that even a year ago," Altman said. The company understands that the lack of privacy could be a blocker to broader user adoption. In addition to AI's demand for so much online data during the training period, it's being asked to produce data from users' chats in some legal contexts. Already, OpenAI has been fighting a court order in its lawsuit with The New York Times, which would require it to save the chats of hundreds of millions of ChatGPT users globally, excluding those from ChatGPT Enterprise customers. In a statement on its website, OpenAI said it's appealing this order, which it called "an overreach." If the court could override OpenAI's own decisions around data privacy, it could open the company up to further demand for legal discovery or law enforcement purposes. Today's tech companies are regularly subpoenaed for user data in order to aid in criminal prosecutions. But in more recent years, there have been additional concerns about digital data as laws began limiting access to previously established freedoms, like a woman's right to choose. When the Supreme Court overturned Roe v. Wade, for example, customers began switching to more private period-tracking apps or to Apple Health, which encrypted their records. Altman asked the podcast host about his own ChatGPT usage, as well, given that Von said he didn't talk to the AI chatbot much due to his own privacy concerns. "I think it makes sense ... to really want the privacy clarity before you use [ChatGPT] a lot -- like the legal clarity," Altman said.
[2]
Even OpenAI's CEO Says Be Careful What You Share With ChatGPT
Maybe don't spill your deepest, darkest secrets with an AI chatbot. You don't have to take my word for it. Take it from the guy behind the most popular generative AI model on the market. Sam Altman, the CEO of ChatGPT maker OpenAI, raised the issue this week in an interview with host Theo Von on the This Past Weekend podcast. He suggested that your conversations with AI should have similar protections as those you have with your doctor or lawyer. At one point, Von said one reason he was hesitant to use some AI tools is because he "didn't know who's going to have" his personal information. "I think that makes sense," Altman said, "to really want the privacy clarity before you use it a lot, the legal clarity." More and more AI users are treating chatbots like their therapists, doctors or lawyers, and that's created a serious privacy problem for them. There are no confidentiality rules and the actual mechanics of what happens to those conversations are startlingly unclear. Of course, there are other problems with using AI as a therapist or confidant, like how bots can give terrible advice or how they can reinforce stereotypes or stigma. (My colleague Nelson Aguilar has compiled a list of the 11 things you should never do with ChatGPT and why.) Altman's clearly aware of the issues here, and seems at least a bit troubled by it. "People use it, young people especially, use it as a therapist, a life coach, I'm having these relationship problems, what should I do?" he said. "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it." The question came up during a part of the conversation about whether there should be more rules or regulations around AI. Rules that stifle AI companies and the tech's development are unlikely to gain favor in Washington these days, as President Donald Trump's AI Action Plan released this week expressed a desire to regulate this technology less, not more. But rules to protect them might find favor. Altman seemed most worried about a lack of legal protections for companies like his to keep them from being forced to turn over private conversations in lawsuits. OpenAI has objected to requests to retain user conversations during a lawsuit with the New York Times over copyright infringement and intellectual property issues. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) "If you go talk to ChatGPT about the most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that," Altman said. "I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that you do with your therapist or whatever." For you, the issue isn't so much that OpenAI might have to turn your conversations over in a lawsuit. It's a question of whom you trust with your secrets. William Agnew, a researcher at Carnegie Mellon University who was part of a team that evaluated chatbots on their performance dealing with therapy-like questions, told me recently that privacy is a paramount issue when confiding in AI tools. The uncertainty around how models work -- and how your conversations are kept from appearing in other people's chats -- is reason enough to be hesitant.
"Even if these companies are trying to be careful with your data, these models are well known to regurgitate information," Agnew said. If ChatGPT or another tool regurgitates information from your therapy session or from medical questions you asked, that could appear if your insurance company or someone else with an interest in your personal life asks the same tool about you. "People should really think about privacy more and just know that almost everything they tell these chatbots is not private," Agnew said. "It will be used in all sorts of ways."
[3]
Even The Guy Who Makes ChatGPT Says You Probably Shouldn't Use Chatbots as Therapists
Maybe don't tell your deepest, darkest secrets to an AI chatbot like ChatGPT. You don't have to take my word for it. Take it from the guy behind the most popular generative AI model on the market. Sam Altman, the CEO of ChatGPT maker OpenAI, raised the issue this week in an interview with host Theo Von on the This Past Weekend podcast. He suggested that your conversations with AI should have similar protections as those you have with your doctor or lawyer. At one point, Von said one reason he was hesitant to use some AI tools is because he "didn't know who's going to have" his personal information. "I think that makes sense," Altman said, "to really want the privacy clarity before you use it a lot, the legal clarity." More and more AI users are treating chatbots like their therapists, doctors or lawyers, and that's created a serious privacy problem for them. There are no confidentiality rules and the actual mechanics of what happens to those conversations are startlingly unclear. Of course, there are other problems with using AI as a therapist or confidant, like how bots can give terrible advice or how they can reinforce stereotypes or stigma. (My colleague Nelson Aguilar has compiled a list of the 11 things you should never do with ChatGPT and why.) Altman's clearly aware of the issues here, and seems at least a bit troubled by it. "People use it, young people especially, use it as a therapist, a life coach, I'm having these relationship problems, what should I do?" he said. "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it." The question came up during a part of the conversation about whether there should be more rules or regulations around AI. Rules that stifle AI companies and the tech's development are unlikely to gain favor in Washington these days, as President Donald Trump's AI Action Plan released this week expressed a desire to regulate this technology less, not more. But rules to protect them might find favor. Altman seemed most worried about a lack of legal protections for companies like his to keep them from being forced to turn over private conversations in lawsuits. OpenAI has objected to requests to retain user conversations during a lawsuit with the New York Times over copyright infringement and intellectual property issues. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) "If you go talk to ChatGPT about the most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that," Altman said. "I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that you do with your therapist or whatever." For you, the issue isn't so much that OpenAI might have to turn your conversations over in a lawsuit. It's a question of whom you trust with your secrets. William Agnew, a researcher at Carnegie Mellon University who was part of a team that evaluated chatbots on their performance dealing with therapy-like questions, told me recently that privacy is a paramount issue when confiding in AI tools.
The uncertainty around how models work -- and how your conversations are kept from appearing in other people's chats -- is reason enough to be hesitant. "Even if these companies are trying to be careful with your data, these models are well known to regurgitate information," Agnew said. If ChatGPT or another tool regurgitates information from your therapy session or from medical questions you asked, that could appear if your insurance company or someone else with an interest in your personal life asks the same tool about you. "People should really think about privacy more and just know that almost everything they tell these chatbots is not private," Agnew said. "It will be used in all sorts of ways."
[4]
Your ChatGPT Convos Aren't Private: 9 Ways You Should Never Use Chatbots
I review AI chatbots like ChatGPT, so I'm no stranger to (or hater of) using AI in my daily life. But I don't rely on them for every task, and neither should you. First and foremost, you shouldn't try to date them: They aren't conscious, so that's best left to Sci-Fi movies. But there are more benign uses of chatbots that can have unintended and potentially negative consequences. Trusting chatbots when you really shouldn't could affect your mental health, harm your relationships, or cost you money and job opportunities, among many other downsides. With all that in mind, here are the top nine things you shouldn't use ChatGPT (or any chatbot) to do. Feel free to chime in with your own advice in the comments, too. 1. To Confess Your Crimes If you think chatbots are black boxes that can keep all your secrets, well, they can't. OpenAI CEO Sam Altman recently confirmed this, saying, "So, if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, like we could be required to produce that." Needless to say, you should never talk with a chatbot about anything illegal you may have done. Certain companies, such as Apple, maintain a fairly aggressive posture in regard to the government and law enforcement, but others don't. For example, even the most privacy-conscious AI company right now, Anthropic, supports classified information sharing between AI companies and the government. This goes beyond actual criminal acts, too. Based on Altman's wording, if there's a lawsuit and your chats contain relevant evidence, AI companies may be compelled to share them. So, be very careful with what you say. 2. To Serve as Your Personal Assistant Regardless of how AI companies brand their chatbots, they just don't make great personal assistants. ChatGPT, for example, can't manage your calendar, order your groceries, set alarms, or take calls. Even dedicated personal assistant features from chatbots, such as ChatGPT's Custom GPTs and Gemini's Gems, have serious limitations, including bare-bones functionality and poor performance. Google's Project Mariner AI assistant, for example, wasn't able to do many tasks (such as ordering groceries and finding me a job) in testing. Of course, you can offload some things a personal assistant would do to a chatbot, such as answering questions or drawing up a travel itinerary. In general, though, ChatGPT isn't much more useful as a personal assistant than Alexa or Siri. Treat chatbots like tools you can use to accomplish specific goals, rather than comprehensive problem solvers. 3. To Answer Your Emails ChatGPT and other AI tools can definitely help you become a better writer, but I don't recommend using them as a personal scribe. AI content is more and more pervasive every day, and people are developing an eye for it. If your email's tone, style, or word choices show the telltale signs of AI, it can make your communications feel impersonal. This doesn't matter if you're confirming your availability for a meeting or something relatively inconsequential, but you probably don't need AI in those instances anyway. Gemini's Smart Reply feature goes beyond crafting basic responses to draft full-fledged emails that match your tone and incorporate specific details. It's impressive technology, but as a human being, I would rather my friends and loved ones just not email me at all if they need AI to write their responses.
If you don't want to give the wrong impression, writing your own emails is always the best practice. Beyond how you can come off to other people, giving an AI access to your email comes with its own privacy drawbacks. 4. To Find New Gigs Searching for jobs can be a brutal grind, so it makes sense to use whatever advantage you can to make the experience even a tiny bit less dehumanizing. You can certainly ask ChatGPT to find you a job, but that should be only an initial step. As an example, I asked ChatGPT to recommend some jobs for a fully remote tech news writer. Instead of suggesting anything, ChatGPT told me to do a search on a job aggregator site, which isn't much help. Chatbots just don't excel at parsing every site out there with job listings, and they aren't good at identifying jobs that overlap with your specific skills. If you're looking for a new job, stick with Indeed and LinkedIn. And if you lose your job, chatbots aren't great at taking the sting out of that, either, even if executives want you to think they are. 5. To Write Your Cover Letter or Resume Just like with using chatbots to answer your emails, using them to craft a resume or write a cover letter can produce similarly awkward, stiff results. As you might expect, demonstrating to a hiring manager that you're either unable or unwilling to put the time into creating these documents yourself doesn't put you in the best light. However, the risks of using ChatGPT for cover letters and resumes don't end there. An AI, no matter how much information you give it, doesn't have the experience and skills you do, so it isn't better at pitching your experience. Accordingly, many experts advise against using AI to write cover letters or resumes. Chatbots can help you format, plan, and phrase your cover letters and resumes, but not write them from scratch. 6. To Do Your Homework I'm not here to tell you not to cheat on your homework. You have to follow your own moral code. That said, ChatGPT isn't usually even the best way to cut corners. For creative assignments, AI content is easy to catch with detection tools or spot with a cursory read. Academic institutions are getting so aggressive about sniffing out AI that even honest students who do their own work are facing accusations of improper AI use. And for math and science, chatbots regularly get things wrong. There just isn't much benefit to making ChatGPT do your homework if it's not going to get it right. 7. To Handle Your Shopping Figuring out what to buy can be a major hassle, but it's still important to make sure you spend your money wisely. Luckily, buying guides on just about every topic imaginable are abundantly available from experienced reviewers. Chatbots aren't nearly as good at suggesting things for you to purchase. Whether you're using ChatGPT's shopping feature, Gemini's Vision Match, or something similar, these features don't always give good advice. Furthermore, it's not always clear where a chatbot sources its suggestions. For example, when I asked ChatGPT for the best laptops of 2025, it didn't name many of the laptops I expected to see. Gemini fared better with the same query, but the results just aren't consistent enough to steer purchasing decisions. 8. To Win Arguments Although using ChatGPT to back up your claims in an argument might not seem all that dangerous, it can cause problems. Chatbots are confirmation-bias machines. If any part of your query suggests a point of view, a chatbot will go out of its way to validate you, even if it shouldn't. 
For example, I spent 30 seconds putting random squiggles and shapes on a canvas, and then I asked ChatGPT for its opinion, saying I thought it was a great commentary on modern art. Unsurprisingly, ChatGPT agreed, but unless I have some latent artistic potential I've missed all these years, that just isn't true. Now, imagine going to ChatGPT for a second opinion on an argument with a friend or loved one. Chances are that it will agree with you, even if your position isn't nearly as solid as you think it is. That can cause unnecessary strife. Stick to reputable sources to back you up. 9. To Get Advice on Anything Consequential Once you start using a chatbot, it's easy to make a habit out of it. Without even thinking, you might look to ChatGPT to diagnose medical issues, get your taxes in order, help you parse sensitive information, maintain your mental health, or figure out what to bet on. Nonetheless, you should avoid doing all of the above or anything else that's actually important. The wrong reply from a chatbot could have serious ramifications when it comes to your well-being. Imagine you know somebody smart who doesn't have expertise in any particular subject. They might be useful for bouncing ideas off of and even occasionally get some specific things correct, but you would never trust them over your accountant, doctor, lawyer, or teacher. In a similar sense, chatbots can be powerful, all-purpose tools, but they can't replace dedicated service providers, especially when it comes to anything mission-critical. Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[5]
Altman: Anything You Say to ChatGPT Can and Will Be Used Against You in Court
OpenAI CEO Sam Altman has issued a serious warning for all those using ChatGPT for therapy or counsel. Your chats aren't legally protected and could be presented in court during lawsuits. People are increasingly turning to chatbots to talk through personal problems, but during a recent appearance on Theo Von's This Past Weekend podcast, Altman warned that OpenAI cannot block those conversations from being used as evidence. "So, if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, like we could be required to produce that. And I think that's very screwed up," Altman said in response to a question about the legal framework for AI. Plus, due to an ongoing lawsuit brought by The New York Times, OpenAI is required to maintain records of all your deleted conversations as well. In the podcast, Altman says a legal or policy framework for AI is needed. He compares ChatGPT conversations with those made with doctors, lawyers, and therapists and opines that AI chatbots should be granted the same legal privileges. "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT," Altman said. "I think we should have, like, the same concept of privacy for your conversations with AI that we do with a therapist or whatever." While AI companies figure that out, Altman said it's fair for users "to really want the privacy clarity before you use [ChatGPT] a lot -- like the legal clarity." Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[6]
Sam Altman just gave the best reason not to trust ChatGPT
OpenAI is required to legally disclose what you've told ChatGPT if subpoenaed. Sam Altman, the face of ChatGPT, recently made an excellent argument for not using ChatGPT or any cloud-based AI chatbot in favor of an LLM running on your PC instead. In speaking on Theo Von's podcast (as unearthed by PCMag.com), Altman pointed out that, right now, OpenAI retains everything you tell it -- which, as Altman notes, can be everything from a casual conversation to deep, meaningful discussions about personal topics. (Whether you should be disclosing your deep dark secrets to ChatGPT is another topic entirely.) Yes, OpenAI keeps your conversations private. But there are no legal protections requiring it to anonymize or indemnify your chats. Put another way, if a court orders OpenAI to disclose what you've told it, it probably will. Imagine divorce proceedings where the defendant had multiple chats asking ChatGPT if they should have an affair with a coworker, or something worse. "I think we will certainly need a legal or a policy framework for AI," Altman told Von, a comedian and podcaster whose legal name is Theodor Capitani von Kurnatowski III, in a clip posted to Twitter. "People talk about the most personal shit in their lives to ChatGPT," Altman said. "People use it, young people, especially use it as a therapist, a life coach, having these relationship problems, what should I do? And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, and we haven't figured that out yet for when you talk to ChatGPT." "If you go talk to chat about your most sensitive stuff, and then there's like, a lawsuit or whatever, like, we could be required to produce that," Altman added. When people talk about running a local LLM on your PC, privacy is often the top selling point. You can run local chatbot apps like GPT4All on a PC with a GPU or an NPU, and more models are arriving all the time. Naturally, you might want to save the output of a local chatbot on your PC. But you don't have to, and any potentially weird or incriminating conversations can be instantly deleted. (If your PC's contents are searched or subpoenaed, however, you won't have access to them. Don't think about defying a court order or warrant to search your PC by deleting those chats, either -- that's illegal.) Running a local AI chatbot on your PC is perfectly legal and you can tell it anything you want. Just consider a real, human, licensed therapist for the best results.
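For readers curious what the local approach described above looks like in practice, here is a minimal sketch using GPT4All's Python bindings. Treat it as a rough illustration of keeping a chat on your own machine, not a recommendation of a particular model: it assumes the gpt4all package is installed, and the model file name is only an example.

```python
# Minimal sketch of a fully local chat, assuming `pip install gpt4all`.
# The model file name below is an example; GPT4All downloads it on first use.
from gpt4all import GPT4All

# Inference runs on your own CPU/GPU -- the prompt never leaves your machine.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "I'm having trouble sleeping before big meetings. Any practical tips?",
        max_tokens=200,
    )
    print(reply)

# Nothing is saved unless you write it to disk yourself; once the process
# exits, the conversation is gone (the usual caveats about OS-level logs apply).
```

None of this changes the article's bottom line: running locally avoids handing your transcript to a provider who can be compelled to produce it, but it doesn't make a chatbot a substitute for a licensed therapist.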
[7]
Here's why you shouldn't use ChatGPT as your therapist -- according to Sam Altman
Turning to ChatGPT for emotional support may not be the best idea for a very simple reason, according to OpenAI CEO Sam Altman. In a recent podcast appearance, Altman warned that AI chatbots aren't held to the same kind of legal confidentiality as a human doctor or therapist is. "People use it -- young people, especially, use it -- as a therapist, a life coach; having these relationship problems and [asking] 'what should I do?'" Altman said in a recent episode of This Past Weekend w/ Theo Von. "And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it," he continued. "There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT." Altman points out that, in the event of a lawsuit, OpenAI could be legally compelled to hand over records of a conversation an individual has had with ChatGPT. The company is already in the midst of a legal battle with the New York Times over retaining deleted chats. In May, a court order required OpenAI to preserve "all output log data that would otherwise be deleted" even if a user or privacy laws requested it be erased. During the podcast conversation, Altman said he thinks AI should "have the same concept of privacy for your conversations with AI that we do with a therapist or whatever -- and no one had to think about that even a year ago." Earlier this year, Anthropic -- the company behind ChatGPT rival Claude -- analyzed 4.5 million conversations to try to determine whether users were turning to chatbots for emotional conversations. According to the research, just 2.9% of Claude AI interactions are emotive conversations, while companionship and roleplay relationships made up just 0.5%. While ChatGPT's user base far exceeds that of Claude, it's still relatively rare that people use the chatbot for an emotional connection. Somewhat at odds with Altman's comments above, a joint study between OpenAI and MIT stated: "Emotional engagement with ChatGPT is rare in real-world usage." The summary went on to add: "Affective cues (aspects of interactions that indicate empathy, affection, or support) were not present in the vast majority of on-platform conversations we assessed, indicating that engaging emotionally is a rare use case for ChatGPT." So far, so good. But here's the sting: conversational AI is only going to get better at interaction and nuance, which could quite easily lead to an increasing number of people turning to it for help with personal issues. ChatGPT's own GPT-5 upgrade is right around the corner and will bring with it more natural interactions and an increase in context length. So while it's going to get easier and easier to share more details with AI, users may want to think twice about what they're prepared to say.
[8]
'We haven't figured that out yet': Sam Altman explains why using ChatGPT as your therapist is still a privacy nightmare
Feeding your private thoughts into an opaque AI is also a risky move. One of the upshots of having an artificial intelligence (AI) assistant like ChatGPT everywhere you go is that people start leaning on it for things it was never meant for. According to OpenAI CEO Sam Altman, that includes therapy and personal life advice -- but it could lead to all manner of privacy problems in the future. On a recent episode of the This Past Weekend w/ Theo Von podcast, Altman explained one major difference between speaking to a human therapist and using an AI for mental health support: "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT." One potential outcome of that is that OpenAI would be legally required to cough up those conversations were it to face a lawsuit, Altman claimed. Without the legal confidentiality that you get when speaking to a doctor or a registered therapist, there would be relatively little to stop your private worries being aired to the public. Altman added that ChatGPT is being used in this way by many users, especially young people, who might be especially vulnerable to that kind of exposure. But regardless of your age, the conversation topics are not the type of content that most people would be happy to see revealed to the wider world. The risk of having your private conversations opened up to scrutiny is just one privacy risk facing ChatGPT users. There is also the issue of feeding your deeply personal worries and concerns into an opaque algorithm like ChatGPT's, with the possibility that it might be used to train OpenAI's algorithm and leak its way back out when other users ask similar questions. That's one reason why many companies have licensed their own ring-fenced versions of AI chatbots. Another alternative is an AI like Lumo, which is built by privacy stalwart Proton and features top-level encryption to protect everything you write. Of course, there's also the question of whether an AI like ChatGPT can replace a therapist in the first place. While there might be some benefits to this, any AI is simply regurgitating the data it is trained on. None are capable of original thought, which limits the effectiveness of the advice they can give you. Whether or not you choose to open up to OpenAI, it's clear that there's a privacy minefield surrounding AI chatbots, whether that means a lack of confidentiality or the danger of having your deepest thoughts used as training data for an inscrutable algorithm. It's going to require a lot of effort and clarity before enlisting an AI therapist is a significantly less risky endeavor.
[9]
OpenAI removes ChatGPT feature after private conversations leak to Google search
OpenAI made a rare about-face Thursday, abruptly discontinuing a feature that allowed ChatGPT users to make their conversations discoverable through Google and other search engines. The decision came within hours of widespread social media criticism and represents a striking example of how quickly privacy concerns can derail even well-intentioned AI experiments. The feature, which OpenAI described as a "short-lived experiment," required users to actively opt in by sharing a chat and then checking a box to make it searchable. Yet the rapid reversal underscores a fundamental challenge facing AI companies: balancing the potential benefits of shared knowledge with the very real risks of unintended data exposure. How thousands of private ChatGPT conversations became Google search results The controversy erupted when users discovered they could search Google using the query "site:chatgpt.com/share" to find thousands of strangers' conversations with the AI assistant. What emerged painted an intimate portrait of how people interact with artificial intelligence -- from mundane requests for bathroom renovation advice to deeply personal health questions and professionally sensitive resume rewrites. (Given the personal nature of these conversations, which often contained users' names, locations, and private circumstances, VentureBeat is not linking to or detailing specific exchanges.) "Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to," OpenAI's security team explained on X, acknowledging that the guardrails weren't sufficient to prevent misuse. The incident reveals a critical blind spot in how AI companies approach user experience design. While technical safeguards existed -- the feature was opt-in and required multiple clicks to activate -- the human element proved problematic. Users either didn't fully understand the implications of making their chats searchable or simply overlooked the privacy ramifications in their enthusiasm to share helpful exchanges. As one security expert noted on X: "The friction for sharing potential private information should be greater than a checkbox or not exist at all." Why Google Bard and Meta AI faced similar data exposure scandals OpenAI's misstep follows a troubling pattern in the AI industry. In September 2023, Google faced similar criticism when its Bard AI conversations began appearing in search results, prompting the company to implement blocking measures. Meta encountered comparable issues when some users of Meta AI inadvertently posted private chats to public feeds, despite warnings about the change in privacy status. These incidents illuminate a broader challenge: AI companies are moving rapidly to innovate and differentiate their products, sometimes at the expense of robust privacy protections. The pressure to ship new features and maintain competitive advantage can overshadow careful consideration of potential misuse scenarios. For enterprise decision makers, this pattern should raise serious questions about vendor due diligence. If consumer-facing AI products struggle with basic privacy controls, what does this mean for business applications handling sensitive corporate data?
What businesses need to know about AI chatbot privacy risks The searchable ChatGPT controversy carries particular significance for business users who increasingly rely on AI assistants for everything from strategic planning to competitive analysis. While OpenAI maintains that enterprise and team accounts have different privacy protections, the consumer product fumble highlights the importance of understanding exactly how AI vendors handle data sharing and retention. Smart enterprises should demand clear answers about data governance from their AI providers. Key questions include: Under what circumstances might conversations be accessible to third parties? What controls exist to prevent accidental exposure? How quickly can companies respond to privacy incidents? The incident also demonstrates the viral nature of privacy breaches in the age of social media. Within hours of the initial discovery, the story had spread across X.com (formerly Twitter), Reddit, and major technology publications, amplifying reputational damage and forcing OpenAI's hand. The innovation dilemma: Building useful AI features without compromising user privacy OpenAI's vision for the searchable chat feature wasn't inherently flawed. The ability to discover useful AI conversations could genuinely help users find solutions to common problems, similar to how Stack Overflow has become an invaluable resource for programmers. The concept of building a searchable knowledge base from AI interactions has merit. However, the execution revealed a fundamental tension in AI development. Companies want to harness the collective intelligence generated through user interactions while protecting individual privacy. Finding the right balance requires more sophisticated approaches than simple opt-in checkboxes. One user on X captured the complexity: "Don't reduce functionality because people can't read. The default are good and safe, you should have stood your ground." But others disagreed, with one noting that "the contents of chatgpt often are more sensitive than a bank account." As product development expert Jeffrey Emanuel suggested on X: "Definitely should do a post-mortem on this and change the approach going forward to ask 'how bad would it be if the dumbest 20% of the population were to misunderstand and misuse this feature?' and plan accordingly." Essential privacy controls every AI company should implement The ChatGPT searchability debacle offers several important lessons for both AI companies and their enterprise customers. First, default privacy settings matter enormously. Features that could expose sensitive information should require explicit, informed consent with clear warnings about potential consequences. Second, user interface design plays a crucial role in privacy protection. Complex multi-step processes, even when technically secure, can lead to user errors with serious consequences. AI companies need to invest heavily in making privacy controls both robust and intuitive. Third, rapid response capabilities are essential. OpenAI's ability to reverse course within hours likely prevented more serious reputational damage, but the incident still raised questions about their feature review process. How enterprises can protect themselves from AI privacy failures As AI becomes increasingly integrated into business operations, privacy incidents like this one will likely become more consequential. 
The stakes rise dramatically when the exposed conversations involve corporate strategy, customer data, or proprietary information rather than personal queries about home improvement. Forward-thinking enterprises should view this incident as a wake-up call to strengthen their AI governance frameworks. This includes conducting thorough privacy impact assessments before deploying new AI tools, establishing clear policies about what information can be shared with AI systems, and maintaining detailed inventories of AI applications across the organization. The broader AI industry must also learn from OpenAI's stumble. As these tools become more powerful and ubiquitous, the margin for error in privacy protection continues to shrink. Companies that prioritize thoughtful privacy design from the outset will likely enjoy significant competitive advantages over those that treat privacy as an afterthought. The high cost of broken trust in artificial intelligence The searchable ChatGPT episode illustrates a fundamental truth about AI adoption: trust, once broken, is extraordinarily difficult to rebuild. While OpenAI's quick response may have contained the immediate damage, the incident serves as a reminder that privacy failures can quickly overshadow technical achievements. For an industry built on the promise of transforming how we work and live, maintaining user trust isn't just a nice-to-have -- it's an existential requirement. As AI capabilities continue to expand, the companies that succeed will be those that prove they can innovate responsibly, putting user privacy and security at the center of their product development process. The question now is whether the AI industry will learn from this latest privacy wake-up call or continue stumbling through similar scandals. Because in the race to build the most helpful AI, companies that forget to protect their users may find themselves running alone.
[10]
Sam Altman gives really good reason why ChatGPT shouldn't be your therapist
If you need another reason to reconsider using an AI chatbot as your therapist, take it from OpenAI CEO Sam Altman. In a recent appearance on This Past Weekend with Theo Von, Altman admitted to the comedian that the AI industry hasn't yet solved the issue of user privacy when it comes to sensitive conversations. Unlike a licensed professional, an AI doesn't offer doctor-patient confidentiality, and legally, your most personal chats aren't protected. "People talk about the most personal shit in their lives to ChatGPT," Altman said. "Young people especially use it as a therapist, a life coach, asking about relationship problems and what to do." But there's a major difference: "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it... We haven't figured that out yet for when you talk to ChatGPT." Without confidentiality protections, anything said in an AI therapy session could be accessed or even subpoenaed in court. The AI industry currently operates in a legal gray area, as the Trump administration continues to navigate the clash between federal and state authority over AI regulation. While a few federal laws targeting deepfakes exist, how user data from AI chats can be used still depends heavily on state laws. This patchwork of regulations creates uncertainty -- especially around privacy -- which could hinder broader user adoption. Adding to the concern, AI models already rely heavily on online data for training and, in some cases, are now being asked to produce user chat data in legal proceedings. In the case of ChatGPT specifically, OpenAI is currently required to retain records of all user conversations -- even those users have deleted -- due to its ongoing legal battle with The New York Times. The company is challenging the court's ruling and is actively seeking to have it overturned. "No one had to think about that even a year ago," Altman said, calling the situation "very screwed up."
[11]
Sam Altman gives warning for using ChatGPT as a therapist
OpenAI CEO Sam Altman said therapy sessions with ChatGPT won't necessarily always remain private. He said there aren't currently any legal grounds to protect sensitive, personal information someone might share with ChatGPT if a lawsuit requires OpenAI to share the information. Altman made the statement during a sit-down with Theo Von for his podcast "This Past Weekend w/ Theo Von" at OpenAI's San Francisco office. Von initially prompted him with a question about what legal systems are currently in place around AI, to which Altman responded by saying "we will certainly need a legal or a policy framework for AI." He went on to point to a specific legal gray area in AI -- people using the chatbot as their therapist. "People talk about the most personal s**t in their lives to ChatGPT," Altman said. "People use it -- young people especially use it -- as a therapist, a life coach." "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it, there's doctor-patient confidentiality, there's legal confidentiality. And we haven't figured that out yet for when you talk to ChatGPT." "So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that," Altman said. "And I think that's very screwed up." "I think we should have the same concept of privacy for your conversations with AI that we do with a therapist," he added. "And no one had to think about that even a year ago. And now I think it's this huge issue of like, how are we going to treat the laws around this?" Altman said this issue needs to be addressed "with some urgency," adding that the policymakers he's spoken to agree. Von responded that he doesn't talk to ChatGPT often because of this privacy issue. "I think it makes sense...to really want the privacy [and] clarity before you use it a lot," Altman responded. Legal privacy concerns aren't the only drawback to using AI chatbots as therapists. A recent study from Stanford University found that AI therapy chatbots express stigma and make inappropriate statements about certain mental health conditions. The researchers concluded that AI therapy chatbots in their current form shouldn't replace human mental health providers due to their bias and "discrimination against marginalized groups," among other reasons. "Nuance is [the] issue - this isn't simply 'LLMs for therapy is bad,' but it's asking us to think critically about the role of LLMs in therapy," senior author of the study Nick Haber told the Stanford Report. "LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be."
[12]
If You've Asked ChatGPT a Legal Question, You May Have Accidentally Doomed Yourself in Court
Imagine this scenario: you're worried you may have committed a crime, so you turn to a trusted advisor -- OpenAI's blockbuster ChatGPT, say -- to describe what you did and get its advice. This isn't remotely far-fetched; lots of people are already getting legal assistance from AI, on everything from divorce proceedings to parking violations. Because people are amazingly stupid, it's almost certain that some have already asked the bot for advice on enormously consequential questions about, say, murder or drug charges. According to OpenAI CEO Sam Altman, anyone who's done so has made a massive error -- because unlike a human lawyer with whom you enjoy sweeping confidentiality protections, ChatGPT conversations can be used against you in court. During a recent conversation with podcaster Theo Von, Altman admitted that there is no "legal confidentiality" when users talk to ChatGPT, and that OpenAI would be legally required to share those exchanges should they be subpoenaed. "Right now, if you talk to a therapist or a lawyer or a doctor... there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality," the CEO said. "And we haven't figured that out yet for when you talk to ChatGPT." In response to that massive acknowledgement, Jessee Bundy of the Creative Counsel Law firm pointed out that lawyers like her had been warning "for over a year" that using ChatGPT for legal purposes could backfire spectacularly. "If you're pasting in contracts, asking legal questions, or asking [the chatbot] for strategy, you're not getting legal advice," the lawyer tweeted. "You're generating discoverable evidence. No attorney-client privilege. No confidentiality. No ethical duty. No one to protect you." "It might feel private, safe, and convenient," she continued. "But lawyers are bound to protect you. ChatGPT isn't -- and can be used against you." When an AI defender came out of the woodwork to throw cold water on her PSA, Bundy clapped back. "I think it is both, no?" needled AI CEO Malte Landwehr. "You get legal advice AND you create discoverable evidence. But one does not negate the other." "For the love of God -- no," the lawyer responded. "ChatGPT can't give you legal advice." "Legal advice comes from a licensed professional who understands your specific facts, goals, risks, and jurisdiction. And is accountable for it," she continued. "ChatGPT is a language model. It generates words that sound right based on patterns, but it doesn't know your situation, and it's not responsible if it's wrong." "That's not advice," Bundy declared. "That's playing legal Mad Libs." Currently, OpenAI is duking it out in court with the New York Times as it attempts to bar the newspaper and its co-plaintiffs from dredging up users' chat logs -- including deleted ones -- in court. Until a judge rules one way or another, those same chats will, per Altman, be discoverable in a court of law -- so chat carefully.
[13]
Think your ChatGPT therapy sessions are private? Think again.
If you've been confessing your deepest secrets to an AI chatbot, it might be time to reevaluate. With more people turning to AI for instant life coaching, tools like ChatGPT are sucking up massive amounts of personal information on their users. While that data stays private under ideal circumstances, it could be dredged up in court -- a scenario that OpenAI CEO Sam Altman warned users about in an appearance on Theo Von's popular podcast this week. "One example that we've been thinking about a lot... people talk about the most personal shit in their lives to ChatGPT," Altman said. "Young people especially, use it as a therapist, as a life coach, 'I'm having these relationship problems, what should I do?' And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it, there's doctor-patient confidentiality, there's legal confidentiality." Altman says that as a society we "haven't figured that out yet" for ChatGPT. Altman called for a policy framework for AI, though in reality OpenAI and its peers have lobbied for a regulatory light touch.
[14]
Sam Altman warns your private ChatGPT chats can be subpoenaed
OpenAI CEO Sam Altman voiced concern that ChatGPT conversations lack legal privilege, potentially making them subject to subpoena in lawsuits, during an interview with podcaster Theo Von last week. Altman identified this privacy gap as a "huge issue." He noted that unlike communications with therapists, lawyers, or doctors, which are protected by legal privilege, ChatGPT conversations currently have no such safeguards. He stated, "And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's like legal privilege for it... And we haven't figured that out yet for when you talk to ChatGPT." Altman added that if sensitive information is shared with ChatGPT and a lawsuit follows, "we could be required to produce that." These comments come as AI increasingly provides psychological, medical, and financial advice. Altman remarked, "I think that's very screwed up," asserting that "we should have like the same concept of privacy for your conversations with AI that we do with a therapist or whatever." Altman additionally highlighted the need for a legal policy framework for AI, calling it "a huge issue." He stated, "That's one of the reasons I get scared sometimes to use certain AI stuff because I don't know how much personal information I want to put in, because I don't know who's going to have it." Policymakers he has consulted reportedly agree this issue requires prompt resolution. Beyond data privilege, Altman expressed apprehension regarding increased surveillance driven by AI adoption. "I am worried that the more AI in the world we have, the more surveillance the world is going to want," he explained, citing governments' desire to prevent malicious use of the technology. While acknowledging that privacy might not be absolute and expressing willingness to "compromise some privacy for collective safety," he cautioned, "History is that the government takes that way too far, and I'm really nervous about that."
[15]
What you share with ChatGPT could be used against you
OpenAI CEO Sam Altman has expressed concern that ChatGPT conversations lack legal privilege protection and could be subpoenaed in lawsuits. OpenAI could be legally required to produce sensitive information and documents shared with its artificial intelligence chatbot ChatGPT, warns OpenAI CEO Sam Altman. Altman highlighted the privacy gap as a "huge issue" during an interview with podcaster Theo Von last week, revealing that, unlike conversations with therapists, lawyers, or doctors with legal privilege protections, conversations with ChatGPT currently have no such protections. "And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's like legal privilege for it... And we haven't figured that out yet for when you talk to ChatGPT." He added that if you talk to ChatGPT about "your most sensitive stuff" and then there is a lawsuit, "we could be required to produce that." Altman's comments come amid a backdrop of an increased use of AI for psychological support, medical and financial advice. "I think that's very screwed up," Altman said, adding that "we should have like the same concept of privacy for your conversations with AI that we do with a therapist or whatever." Altman also expressed the need for a legal policy framework for AI, saying that this is a "huge issue." "That's one of the reasons I get scared sometimes to use certain AI stuff because I don't know how much personal information I want to put in, because I don't know who's going to have it." He believes there should be the same concept of privacy for AI conversations as exists with therapists or doctors, and policymakers he has spoken with agree this needs to be resolved and requires quick action. Altman also expressed concerns about more surveillance coming from the accelerated adoption of AI globally. "I am worried that the more AI in the world we have, the more surveillance the world is going to want," he said, as governments will want to make sure people are not using the technology for terrorism or nefarious purposes. He said that for this reason, privacy did not have to be absolute, and he was "totally willing to compromise some privacy for collective safety," but there was a caveat. "History is that the government takes that way too far, and I'm really nervous about that."
[16]
Telling secrets to ChatGPT? Using it as a therapist? Your AI chats aren't legally private, warns Sam Altman
OpenAI CEO Sam Altman has warned that conversations with ChatGPT are not legally protected, unlike those with therapists, doctors, or lawyers. In a podcast with Theo Von, Altman explained that users often share deeply personal information with the AI, but current laws do not offer confidentiality. This means OpenAI could be required to hand over user chats in legal cases. He stressed the need for urgent privacy regulations, as the legal system has yet to catch up with AI's growing role in users' personal lives.
[17]
No legal confidentiality when using ChatGPT like in therapy: OpenAI CEO Sam Altman - The Economic Times
OpenAI CEO Sam Altman said the AI industry hasn't yet ensured full privacy for sensitive ChatGPT conversations. He called for therapist-level confidentiality, especially as OpenAI challenges a court order that would override its privacy choices. Altman warned of privacy risks and said legal clarity is needed before users fully trust the platform. During a recent episode of Theo Von's podcast 'This Past Weekend w/ Theo Von', OpenAI CEO Sam Altman admitted that the AI industry still hasn't worked out how to fully protect user privacy, especially when it comes to sensitive conversations. He explained, "People use it -- young people, especially, use it -- as a therapist, a life coach; having these relationship problems and [asking] 'what should I do?'" Altman pointed out that when people speak to a therapist, lawyer, or doctor, their conversations are protected by legal confidentiality. But right now, that kind of privacy doesn't exist with ChatGPT. "We haven't figured that out yet for ChatGPT. So, if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that." "I think that's very screwed up," he added. "I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever -- and no one had to think about that even a year ago." His comments come as OpenAI fights a court order in its ongoing legal battle with The New York Times. The order could force the company to save user chats from hundreds of millions of people worldwide -- except for those using ChatGPT Enterprise. In a public statement, OpenAI said it is appealing the decision, calling the court's demand "an overreach." The company warned that allowing courts to override its privacy choices could set a worrying precedent, exposing user data to further legal and law enforcement access. Altman also asked Von about his own experience with ChatGPT, after the podcast host admitted he didn't use it much because of privacy worries. "I think it makes sense ... to really want the privacy clarity before you use [ChatGPT] a lot -- like the legal clarity," Altman said.
[18]
OpenAI Pulls ChatGPT's 'Discoverable' Feature Over Privacy Concerns: 'Too Many Opportunities...To Accidentally Share Things'
OpenAI has decided to retract a new feature from its ChatGPT app that enabled users to make their private conversations discoverable by search engines. This decision comes in the wake of worries about accidental oversharing. ChatGPT Sharing Feature Pulled Over Google Indexing Risks The "Make this chat discoverable" feature, which was opt-in, was intended to help users find useful conversations. However, it was removed due to concerns about security and privacy, as announced by Dane Stuckey, OpenAI's chief information security officer, in an X post on Thursday. Stuckey stated, "Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option." He also noted that OpenAI is taking steps to remove indexed content from applicable search engines. The decision to remove the feature came after concerns were raised by newsletter writer Luiza Jarovsky that private conversations with ChatGPT were being made public, reported Business Insider. Jarovsky noted that when using the chatbot's sharing feature, users were inadvertently allowing their exchanges to be indexed by Google. Users had to take several actions to share their chats, including ticking a box to "make this chat discoverable," after which those chats could appear in web searches. The shared chats were anonymized to reduce the risk of personal identification. Altman Sounds Alarm on AI Risks and ChatGPT Confidentiality This incident comes in the wake of OpenAI CEO Sam Altman's warning about the lack of legal confidentiality for ChatGPT conversations. Altman cautioned that sensitive chats could be subject to court subpoenas, as the platform does not offer the same legal protections as a doctor, lawyer, or licensed mental health professional. Earlier, Altman had also expressed concerns about the potential threats artificial intelligence (AI) poses to financial security, urging institutions to stay ahead of the technology. He specifically highlighted the use of voice prints for high-value transactions, indicating the dangers of AI outsmarting current authentication methods. This rollback by OpenAI is a step towards addressing these concerns and ensuring user privacy and data security.
[19]
Sam Altman Issues Chilling Warning: ChatGPT Is Not A Therapist - Your Deepest Secrets Could Be Used Against You In Court With No Legal Protection
Users are increasingly relying on AI tools for assistance with their daily workload, and some are even turning to the platform for medical, personal, and professional advice. The tool has become more of a personal assistant, a go-to for everyday problems, which often leads to over-dependence on the chatbot. While it might seem harmless to seek therapy from the platform, there is no guarantee that the information shared will stay under wraps and, unlike professional help, no confidentiality is maintained. That became clearer after Sam Altman, the CEO of OpenAI, issued a warning about relying too heavily on the AI assistant, especially with deeply personal information.

As AI tools gain more capabilities and a better grasp of emotional nuance, many people have started relying on chatbots for therapy or emotional support. Unlike traditional therapy, which treats doctor-patient confidentiality as a priority, AI does not have the legal framework necessary to safeguard sensitive conversations. Altman backed this up when he shared his concerns during an appearance on This Past Weekend w/ Theo Von, as reported by TechCrunch, warning against confiding in the tool for deeply personal matters.

Altman acknowledged that, because AI tools now offer more emotional understanding and can engage in supportive dialogue, they can give a sense of privacy they do not actually provide; until proper regulations are in place, he argued, AI should not be treated as a substitute for professional mental health care. Stating his apprehensions about use of the chatbot, he said:

People talk about the most personal sh** in their lives to ChatGPT. People use it -- young people, especially, use it -- as a therapist and a life coach; they are having these relationship problems and [asking], 'What should I do?' And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT.

Since there is no legal confidentiality for these AI tools, Altman advises caution and warns about the serious consequences users could face in a legal scenario. If, for instance, a person is involved in legal trouble and OpenAI is required to share the relevant conversations in the case, the company would have no legal privilege to protect that confidentiality and would have no choice but to hand over the deeply personal information shared. He further argued that conversations with AI should, in fact, have the same right to privacy, but the technology has evolved so quickly that legal safeguards have not kept pace.
[20]
Sam Altman Warns ChatGPT Conversations Aren't Protected Like Talking With Your Psychologist Or Lawyer: 'We Could Be Required To Produce That' In Court
OpenAI CEO Sam Altman has raised concerns over the lack of legal confidentiality surrounding ChatGPT conversations, warning users that sensitive chats could be subject to court subpoenas.

What Happened: In his appearance on the podcast with Theo Von last week, Altman addressed the growing use of AI tools like ChatGPT for deeply personal matters, particularly among younger users. He said that while people commonly use ChatGPT as a therapist, life coach or confidant, the platform does not currently offer the same legal protections as a doctor, lawyer, or licensed mental health professional. "People talk about the most personal s**t in their lives to ChatGPT," Altman said. "And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege... We haven't figured that out yet for when you talk to ChatGPT."

He warned that without legal protections, OpenAI could be compelled to hand over user conversations in legal proceedings. "If you go talk to ChatGPT about your most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that," Altman said. "I think that's very s****ed up." Altman said the issue is pressing and called on policymakers to act quickly. "I think we need this point addressed with some urgency," he added. "The policymakers I've talked to about it broadly agree -- it's new and now we've got to do it quickly."

Why It's Important: Altman's comments underscore a broader regulatory gap in AI policy as millions of users turn to generative tools for mental health, legal and relationship advice. His remarks add to the growing debate about AI governance and user rights in the digital age. In February earlier this year, at the AI Action Summit in Paris, the U.S. and U.K. chose not to sign a global AI safety declaration backed by around 60 countries, including China, India and Germany. At the time, U.S. Vice President JD Vance criticized the agreement as overly cautious, warning that excessive regulation could hinder AI innovation. Last week, Chinese Premier Li Qiang called for the creation of a global organization to foster international cooperation on AI.
Sam Altman, CEO of OpenAI, cautions users about the lack of legal protection for conversations with AI chatbots like ChatGPT, especially when used for sensitive topics like therapy.
Sam Altman, CEO of OpenAI, has issued a stark warning about the lack of legal protection for conversations with AI chatbots like ChatGPT. In a recent interview on Theo Von's podcast "This Past Weekend," Altman highlighted the growing trend of users, especially young people, turning to AI for personal advice and emotional support [1].
Altman expressed concern over the absence of legal confidentiality for AI conversations, contrasting it with the protected status of communications with therapists, lawyers, and doctors. He stated, "People talk about the most personal sh** in their lives to ChatGPT," emphasizing the sensitive nature of information shared with AI [1].
The CEO pointed out a critical issue: in the event of a lawsuit, OpenAI could be legally compelled to produce user conversations. This lack of privacy protection is described by Altman as "very screwed up," suggesting that AI conversations should have the same level of privacy as those with human professionals [2].
The absence of a comprehensive legal framework for AI has created this privacy vacuum. Altman advocates for establishing legal privileges for AI conversations similar to those existing for traditional professional consultations [3].
This issue is further complicated by ongoing legal battles. OpenAI is currently fighting a court order in a lawsuit with The New York Times, which would require the company to retain chat logs of hundreds of millions of users globally [1].
The lack of privacy protections has significant implications for ChatGPT users. William Agnew, a researcher at Carnegie Mellon University, warns that information shared with chatbots is not private and could be used in various ways [2]. This includes the potential for sensitive information to be regurgitated in other contexts or accessed by third parties.
Experts, including Altman himself, advise caution when sharing personal information with AI chatbots. Users are encouraged to seek "privacy clarity" before extensively using these tools for sensitive conversations [4].
As AI technology continues to evolve and integrate into daily life, the need for clear legal and ethical guidelines becomes increasingly urgent. Altman's comments highlight the growing tension between technological advancement and privacy protection in the AI era [5].
The industry and policymakers face the challenge of developing frameworks that can protect user privacy while allowing for the continued development and use of AI technologies. Until such protections are in place, users are advised to exercise discretion in their interactions with AI chatbots, particularly when discussing sensitive personal matters.