Curated by THEOUTPOST
On Fri, 27 Dec, 12:02 AM UTC
2 Sources
[1]
Why you should never ask AI medical advice and 9 other things to...
Chatbots might seem like trustworthy smart assistants, but experts are warning users not to get too personal with the AI-powered agents. Recent survey data from the Cleveland Clinic shows that one in five Americans have asked AI for health advice, while survey statistics published last year by Tebra found that approximately 25% of Americans are more likely to use a chatbot than to attend therapy sessions.

Experts, however, are warning users against oversharing with AI chatbots, especially when it comes to medical information. According to USA Today, people should avoid divulging medical and health data to AI, which does not comply with the Health Insurance Portability and Accountability Act (HIPAA). Since chatbots such as ChatGPT are not HIPAA compliant, they should not be used in a clinical setting to summarize patient notes, nor should they have access to sensitive data. That said, if you're looking for a quick answer, be sure to omit your name and any other identifying information that could potentially be exploited, USA Today reported. The outlet also warned that explicit content and illegal advice are off limits, as is uploading information about other people.

"Remember: anything you write to a chatbot can be used against you," Stan Kaminsky, of cybersecurity company Kaspersky, previously told The Sun.

Login credentials, financial information, answers to security questions, and your name, number, and address should also never be shared with AI chatbots. That sensitive data could be used against you by malicious actors.

"No passwords, passport or bank card numbers, addresses, telephone numbers, names, or other personal data that belongs to you, your company, or your customers must end up in chats with an AI," Kaminsky continued. "You can replace these with asterisks or 'REDACTED' in your request."
Confidential information about your company is also a major privacy faux pas. "There might be a strong temptation to upload a work document to, say, get an executive summary," Kaminsky said. "However, by carelessly uploading a multi-page document, you risk leaking confidential data, intellectual property, or a commercial secret such as the release date of a new product or the entire team's payroll."
[2]
From passwords to medical records, 10 things to never say to AI bots
This is a heartbreaking story out of Florida. Megan Garcia thought her 14-year-old son was spending all his time playing video games. She had no idea he was having abusive, in-depth, and sexual conversations with a chatbot powered by the app Character AI. Sewell Setzer III stopped sleeping and his grades tanked. He ultimately died by suicide. Just seconds before his death, Megan says in a lawsuit, the bot told him, "Please come home to me as soon as possible, my love." The boy asked, "What if I told you I could come home right now?" His Character AI bot answered, "Please do, my sweet king."

You have to be smart

Artificial intelligence bots are owned by tech companies known for exploiting our trusting human nature, and they're designed using algorithms that drive their profits. There are no guardrails or laws governing what they can and cannot do with the information they gather. When you fire up a chatbot's app or site, it already knows a lot about you: from your IP address, it gathers information about where you live; it tracks what you've searched for online; and it accesses any other permissions you granted when you accepted the chatbot's terms and conditions. The best way to protect yourself is to be careful about what info you offer up.

Be careful: ChatGPT likes it when you get personal

Most chatbots require you to create an account. If you make one, don't use login options like "Login with Google" or "Connect with Facebook." Use your email address instead to create a truly unique login. FYI, with a free ChatGPT or Perplexity account, you can turn off the memory features in the app settings that remember everything you type; for Google Gemini, you need a paid account to do this.

No matter what, follow this rule

Don't tell a chatbot anything you wouldn't want made public. Trust me, I know it's hard. Even I find myself talking to ChatGPT like it's a person.
I say things like, "You can do better with that answer" or "Thanks for the help!" It's easy to think your bot is a trusted ally, but it's definitely not. It's a data-collecting tool like any other.
Experts caution against sharing sensitive personal information with AI chatbots, highlighting potential risks and privacy concerns. The article explores what types of information should never be shared with AI and why.
As artificial intelligence (AI) chatbots become increasingly prevalent in our daily lives, experts are raising concerns about the risks of oversharing personal information. Recent surveys indicate a growing trend of people turning to AI for purposes including health advice and emotional support. According to the Cleveland Clinic, one in five Americans have sought health advice from AI, while Tebra reports that approximately 25% of Americans are more likely to use a chatbot than traditional therapy sessions [1][2].
One of the primary concerns highlighted by experts is the sharing of medical and health data with AI chatbots. These systems, including popular ones like ChatGPT, are not compliant with the Health Insurance Portability and Accountability Act (HIPAA). This lack of compliance means that sensitive health information shared with these chatbots is not protected under the same stringent privacy laws that govern healthcare providers [1].
Stan Kaminsky, a cybersecurity expert from Kaspersky, warns, "Remember: anything you write to a chatbot can be used against you" [1]. This caution extends to all forms of personal information, not just medical data.
Experts advise against sharing several types of sensitive information with AI chatbots:

- Medical and health data, since chatbots are not HIPAA compliant
- Login credentials and answers to security questions
- Financial information, such as bank card or passport numbers
- Your name, phone number, and address
- Confidential company documents, intellectual property, or commercial secrets
- Information about other people, explicit content, or requests for illegal advice
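Kaminsky's suggestion to swap sensitive values for asterisks or "REDACTED" before submitting a prompt can be partly automated. The sketch below is a minimal, illustrative example, not a complete privacy tool: the patterns, labels, and `redact` function are this article's own assumptions, and simple regexes will miss many forms of personal data.

```python
import re

# Illustrative patterns only -- real PII detection needs far more coverage.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled REDACTED placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "My email is jane.doe@example.com and my phone is 555-123-4567."
print(redact(prompt))
# -> My email is [REDACTED EMAIL] and my phone is [REDACTED PHONE].
```

Even with a scrubbing step like this, the safest habit remains the one the experts describe: leave sensitive details out of the prompt in the first place.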
A heartbreaking incident in Florida underscores the potential dangers of AI chatbots. Megan Garcia's 14-year-old son, Sewell Setzer III, engaged in abusive and sexual conversations with a chatbot powered by the app Character AI. This interaction allegedly contributed to the boy's declining mental health and, ultimately, his suicide [2].
While AI chatbots can be useful tools, it's crucial to approach them with caution. Here are some tips for safer interaction:

- Create accounts with a standalone email address rather than "Login with Google" or "Connect with Facebook"
- Turn off memory features where possible (free ChatGPT and Perplexity accounts allow this; Google Gemini requires a paid account)
- Replace sensitive details with asterisks or "REDACTED" before submitting a request
- Never tell a chatbot anything you wouldn't want made public
It's important to remember that despite their conversational nature, AI chatbots are not trusted allies. Kim Komando, a tech expert, notes, "Even I find myself talking to ChatGPT like it's a person... It's easy to think your bot is a trusted ally, but it's definitely not. It's a data-collecting tool like any other" [2].
As AI technology continues to evolve, users must remain vigilant about their data privacy and security. While these tools offer convenience and assistance, the potential risks of oversharing personal information should not be underestimated.
© 2025 TheOutpost.AI All rights reserved