4 Sources
[1]
Sam Altman warns there's no legal confidentiality when using ChatGPT as a therapist | TechCrunch
ChatGPT users may want to think twice before turning to their AI app for therapy or other kinds of emotional support. According to OpenAI CEO Sam Altman, the AI industry hasn't yet figured out how to protect user privacy when it comes to these more sensitive conversations, because there's no doctor-patient confidentiality when your doc is an AI.

The exec made these comments on a recent episode of Theo Von's podcast, This Past Weekend w/ Theo Von. In response to a question about how AI works with today's legal system, Altman said one of the problems of not yet having a legal or policy framework for AI is that there's no legal confidentiality for users' conversations.

"People talk about the most personal sh** in their lives to ChatGPT," Altman said. "People use it -- young people, especially, use it -- as a therapist, a life coach; having these relationship problems and [asking] 'what should I do?' And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT."

This could create a privacy concern for users in the case of a lawsuit, Altman added, because OpenAI would be legally required to produce those conversations today. "I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever -- and no one had to think about that even a year ago," Altman said.

The company understands that the lack of privacy could be a blocker to broader user adoption. In addition to AI's demand for so much online data during the training period, it's being asked to produce data from users' chats in some legal contexts. Already, OpenAI has been fighting a court order in its lawsuit with The New York Times, which would require it to save the chats of hundreds of millions of ChatGPT users globally, excluding those from ChatGPT Enterprise customers. In a statement on its website, OpenAI said it's appealing this order, which it called "an overreach." If the court could override OpenAI's own decisions around data privacy, it could open the company up to further demands for legal discovery or law enforcement purposes.

Today's tech companies are regularly subpoenaed for user data in order to aid in criminal prosecutions. But in more recent years, there have been additional concerns about digital data as laws began limiting access to previously established freedoms, like a woman's right to choose. When the Supreme Court overturned Roe v. Wade, for example, customers began switching to more private period-tracking apps or to Apple Health, which encrypted their records.

Altman asked the podcast host about his own ChatGPT usage, as well, given that Von said he didn't talk to the AI chatbot much due to his own privacy concerns. "I think it makes sense ... to really want the privacy clarity before you use [ChatGPT] a lot -- like the legal clarity," Altman said.
[2]
Even OpenAI's CEO Says Be Careful What You Share With ChatGPT
Maybe don't spill your deepest, darkest secrets to an AI chatbot. You don't have to take my word for it. Take it from the guy behind the most popular generative AI model on the market.

Sam Altman, the CEO of ChatGPT maker OpenAI, raised the issue this week in an interview with host Theo Von on the This Past Weekend podcast. He suggested that your conversations with AI should have protections similar to those you have with your doctor or lawyer.

At one point, Von said one reason he was hesitant to use some AI tools is because he "didn't know who's going to have" his personal information. "I think that makes sense," Altman said, "to really want the privacy clarity before you use it a lot, the legal clarity."

More and more AI users are treating chatbots like their therapists, doctors or lawyers, and that's created a serious privacy problem for them. There are no confidentiality rules, and the actual mechanics of what happens to those conversations are startlingly unclear. Of course, there are other problems with using AI as a therapist or confidant, like how bots can give terrible advice or how they can reinforce stereotypes or stigma. (My colleague Nelson Aguilar has compiled a list of the 11 things you should never do with ChatGPT and why.)

Altman's clearly aware of the issues here, and seems at least a bit troubled by them. "People use it, young people especially, use it as a therapist, a life coach, I'm having these relationship problems, what should I do?" he said. "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it."

The question came up during a part of the conversation about whether there should be more rules or regulations around AI. Rules that stifle AI companies and the tech's development are unlikely to gain favor in Washington these days, as President Donald Trump's AI Action Plan released this week expressed a desire to regulate this technology less, not more. But rules to protect them might find favor.

Altman seemed most worried about a lack of legal protections for companies like his to keep them from being forced to turn over private conversations in lawsuits. OpenAI has objected to requests to retain user conversations during a lawsuit with the New York Times over copyright infringement and intellectual property issues. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

"If you go talk to ChatGPT about the most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that," Altman said. "I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that you do with your therapist or whatever."

For you, the issue isn't so much that OpenAI might have to turn your conversations over in a lawsuit. It's a question of whom you trust with your secrets. William Agnew, a researcher at Carnegie Mellon University who was part of a team that evaluated chatbots on their performance dealing with therapy-like questions, told me recently that privacy is a paramount issue when confiding in AI tools. The uncertainty around how models work -- and how your conversations are kept from appearing in other people's chats -- is reason enough to be hesitant.
"Even if these companies are trying to be careful with your data, these models are well known to regurgitate information," Agnew said. If ChatGPT or another tool regurgitates information from your therapy session or from medical questions you asked, that could appear if your insurance company or someone else with an interest in your personal life asks the same tool about you. "People should really think about privacy more and just know that almost everything they tell these chatbots is not private," Agnew said. "It will be used in all sorts of ways."
[3]
Sam Altman issues warning about using ChatGPT as a therapist
OpenAI CEO Sam Altman said therapy sessions with ChatGPT won't necessarily always remain private. He said there aren't currently any legal grounds to protect sensitive, personal information someone might share with ChatGPT if a lawsuit requires OpenAI to share the information.

Altman made the statement during a sit-down with Theo Von for his podcast "This Past Weekend w/ Theo Von" at OpenAI's San Francisco office. Von initially prompted him with a question about what legal systems are currently in place around AI, to which Altman responded, "we will certainly need a legal or a policy framework for AI." He went on to point to a specific legal gray area in AI -- people using the chatbot as their therapist.

"People talk about the most personal s**t in their lives to ChatGPT," Altman said. "People use it -- young people especially use it -- as a therapist, a life coach."

"Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it, there's doctor-patient confidentiality, there's legal confidentiality. And we haven't figured that out yet for when you talk to ChatGPT."

"So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that," Altman said. "And I think that's very screwed up."

"I think we should have the same concept of privacy for your conversations with AI that we do with a therapist," he added. "And no one had to think about that even a year ago. And now I think it's this huge issue of like, how are we going to treat the laws around this?"

Altman said this issue needs to be addressed "with some urgency," adding that the policymakers he's spoken to agree. Von responded that he doesn't talk to ChatGPT often because of this privacy issue. "I think it makes sense...to really want the privacy [and] clarity before you use it a lot," Altman responded.

Legal privacy concerns aren't the only drawback to using AI chatbots as therapists. A recent study from Stanford University found that AI therapy chatbots express stigma and make inappropriate statements about certain mental health conditions. The researchers concluded that AI therapy chatbots in their current form shouldn't replace human mental health providers due to their bias and "discrimination against marginalized groups," among other reasons.

"Nuance is [the] issue - this isn't simply 'LLMs for therapy is bad,' but it's asking us to think critically about the role of LLMs in therapy," senior author of the study Nick Haber told the Stanford Report. "LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be."
[4]
Think your ChatGPT therapy sessions are private? Think again.
If you've been confessing your deepest secrets to an AI chatbot, it might be time to reevaluate. With more people turning to AI for instant life coaching, tools like ChatGPT are sucking up massive amounts of personal information on their users. While that data stays private under ideal circumstances, it could be dredged up in court - a scenario that OpenAI CEO Sam Altman warned users about in an appearance on Theo Von's popular podcast this week.

"One example that we've been thinking about a lot... people talk about the most personal shit in their lives to ChatGPT," Altman said. "Young people especially, use it as a therapist, as a life coach, 'I'm having these relationship problems, what should I do?' And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it, there's doctor-patient confidentiality, there's legal confidentiality."

Altman says that as a society we "haven't figured that out yet" for ChatGPT. He called for a policy framework for AI, though in reality OpenAI and its peers have lobbied for a light regulatory touch.
Sam Altman, CEO of OpenAI, cautions users about the lack of legal confidentiality when using ChatGPT for personal conversations, especially as a substitute for therapy. He highlights the need for privacy protections similar to those in professional counseling.
Sam Altman, CEO of OpenAI, has issued a stark warning about the privacy risks associated with using ChatGPT for personal conversations, particularly as a substitute for therapy. In a recent interview on Theo Von's podcast "This Past Weekend," Altman highlighted the growing trend of users, especially young people, turning to AI chatbots for emotional support and life advice [1].
Altman emphasized that unlike conversations with human therapists, lawyers, or doctors, there is currently no legal framework to protect the privacy of discussions with AI chatbots. "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it... And we haven't figured that out yet for when you talk to ChatGPT," Altman stated [3].
The OpenAI CEO expressed concern about the potential consequences in legal scenarios. He warned that in the event of a lawsuit, OpenAI could be compelled to disclose users' private conversations with ChatGPT [2]. This lack of protection could expose users' sensitive information, a situation Altman described as "very screwed up."
Altman advocated for privacy protections for AI conversations similar to those that exist for professional counseling. "I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever," he stated, emphasizing the urgency of addressing this issue [4].
The privacy issue extends beyond therapy-like conversations. William Agnew, a researcher at Carnegie Mellon University, pointed out that the uncertainty surrounding how AI models work and how conversations are kept private is a significant concern. There's a risk that sensitive information shared with chatbots could be inadvertently revealed in other contexts [2].
Altman stressed the need for a comprehensive legal and policy framework for AI to address these privacy concerns. He mentioned that policymakers he has spoken to agree on the urgency of this matter [3]. However, it's worth noting that OpenAI and similar companies have lobbied for a light regulatory touch in the past [4].
Beyond privacy issues, recent research from Stanford University has highlighted other problems with using AI chatbots for therapy. The study found that these bots can express stigma and make inappropriate statements about certain mental health conditions, potentially discriminating against marginalized groups [3].
As AI continues to integrate into various aspects of our lives, Altman's warning serves as a crucial reminder of the need to critically examine and address the privacy and ethical implications of these technologies, especially in sensitive areas like mental health support.