2 Sources
[1]
Altman Warns That Your ChatGPT Conversations Can (and Will) Be Used Against You in Court
OpenAI CEO Sam Altman has issued a serious warning for all those using ChatGPT for therapy or counsel. Your chats aren't legally protected and could be presented in court during lawsuits. People are increasingly turning to chatbots to talk through personal problems, but during a recent appearance on Theo Von's This Past Weekend podcast, Altman warned that OpenAI cannot block those conversations from being used as evidence. "So, if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, like we could be required to produce that. And I think that's very screwed up," Altman said in response to a question about the legal framework for AI. Plus, due to an ongoing lawsuit brought by The New York Times, OpenAI is required to maintain records of all your deleted conversations as well. In the podcast, Altman says a legal or policy framework for AI is needed. He compares ChatGPT conversations with those made with doctors, lawyers, and therapists and opines that AI chatbots should be granted the same legal privileges. "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT," Altman said. "I think we should have, like, the same concept of privacy for your conversations with AI that we do with a therapist or whatever." While AI companies figure that out, Altman said it's fair for users "to really want the privacy clarity before you use [ChatGPT] a lot -- like the legal clarity." Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[2]
Be careful what you tell ChatGPT, lawyers warn
Lawyers have warned about the dangers of sharing sensitive personal information with generative AI tools because computer programs have no regard for privacy and are not bound by confidentiality laws. Sonia Haque-Vatcher, a risk advisory partner at Ashurst law firm who has expertise in AI and data, said tools such as ChatGPT responded in such a human-like way that they encouraged a false sense of safety and a culture of oversharing.
Sam Altman, CEO of OpenAI, cautions users about the lack of legal protection for ChatGPT conversations, which could be used as evidence in court. He advocates for AI chatbot conversations to receive legal privileges similar to those covering conversations with doctors, lawyers, and therapists.
Sam Altman, CEO of OpenAI, has issued a stark warning to users of ChatGPT, the company's popular AI chatbot. During an appearance on Theo Von's "This Past Weekend" podcast, Altman cautioned that conversations with ChatGPT are not legally protected and could potentially be used as evidence in court proceedings [1].
Source: PC Magazine
Altman emphasized the lack of legal framework surrounding AI conversations, stating, "So, if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, like we could be required to produce that. And I think that's very screwed up" [1]. This revelation comes at a time when an increasing number of people are turning to AI chatbots for personal counseling and problem-solving.
The OpenAI CEO drew parallels between ChatGPT conversations and those held with professionals such as doctors, lawyers, and therapists. He argued that AI chatbots should be granted similar legal privileges to ensure user privacy [1]. "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT," Altman explained [1].
Adding to the complexity of the situation, OpenAI is currently embroiled in a lawsuit with The New York Times. As a result, the company is required to maintain records of all deleted conversations, further compromising user privacy [1]. This legal battle underscores the urgent need for a comprehensive legal and policy framework governing AI interactions.
Source: Australian Financial Review
Legal experts have also weighed in on the potential dangers of sharing sensitive information with AI tools. Sonia Haque-Vatcher, a risk advisory partner at Ashurst law firm with expertise in AI and data, warned that the human-like responses of tools like ChatGPT could create a false sense of security, leading to oversharing of personal information [2].
Altman acknowledged the need for users to have clear legal and privacy guidelines before extensively using ChatGPT. He stated, "I think it's fair to really want the privacy clarity before you use [ChatGPT] a lot -- like the legal clarity" [1]. This call for transparency highlights the growing concern over the intersection of AI technology and personal privacy in an evolving digital landscape.
The concerns raised by Altman and legal experts extend beyond OpenAI and ChatGPT, potentially affecting the entire AI industry. As AI tools become more integrated into daily life, the need for robust privacy protections and legal frameworks becomes increasingly critical. The outcome of these discussions could shape the future of AI interactions and user trust in these technologies.