[1]
11 Things You Shouldn't Be Using ChatGPT For
ChatGPT and other AI chatbots have revolutionized the way we interact with the world. You can use them to help organize your life, plan your next trip, figure out groceries for the week and even consider your future career options. However, ChatGPT isn't infallible. While I'm a fan, I also know the limitations of ChatGPT, and you should too, whether you're a newbie or an expert. It's fun for trying new recipes, learning a foreign language or planning a vacation, but you don't want to give ChatGPT carte blanche in your life. It's not great at everything -- in fact, it can be downright sketchy at a lot of things. ChatGPT sometimes hallucinates information and passes it off as fact, and it may not always have up-to-date information. It's incredibly confident, even when it's straight up wrong. (The same can be said about other generative AI tools, too, of course.) That matters more the higher the stakes get, like when taxes, medical bills, court dates or bank balances enter the chat. If you're unsure about when turning to ChatGPT might be risky, here are 11 scenarios when you should put down the AI and choose another option. Don't use ChatGPT for any of the following.

(Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

I've definitely fed ChatGPT my symptoms out of curiosity, but the answers that come back can read like your worst nightmare. As you pore over potential diagnoses, you could swing from dehydration and the flu to some type of cancer. I have a lump on my chest and entered that information into ChatGPT. Lo and behold, it told me I may have cancer. In fact, I have a lipoma, which is not cancerous and occurs in 1 in every 1,000 people. My licensed doctor told me that. I'm not saying there are no good uses of ChatGPT for health: It can help you draft questions for your next appointment, translate medical jargon and organize a symptom timeline so you can walk in better prepared. And that could help make doctor visits less overwhelming. However, AI can't order labs or examine you, and it definitely doesn't carry malpractice insurance. Know its limits.

ChatGPT can offer grounding techniques, sure, but it can't pick up the phone when you're in real trouble with your mental health. I know some people use ChatGPT as a substitute therapist. CNET's Corin Cesaric found it mildly helpful for working through grief, as long as she kept its limits front of mind. But as someone who has a very real, very human therapist, I can tell you that ChatGPT is still really only a pale imitation at best, and incredibly risky at worst. ChatGPT doesn't have lived experience, can't read your body language or tone, and has zero capacity for genuine empathy. It can only simulate it. A licensed therapist operates under legal mandates and professional codes that protect you from harm. ChatGPT doesn't. Its advice can misfire, overlook red flags or unintentionally reinforce biases baked into its training data. Leave the deeper work -- the hard, messy, human work -- to an actual human who is trained to properly handle it. If you or someone you love is in crisis, please dial 988 in the US, or your local hotline.

If your carbon-monoxide alarm starts chirping, please don't open ChatGPT and ask it if you're in real danger. I'd go outside first and ask questions later. Large language models can't smell gas, detect smoke or dispatch an emergency crew.
In a crisis, every second you spend typing is a second you're not evacuating or dialing 911. ChatGPT can only work with the scraps of info you feed it, and in an emergency, that may be too little, too late. So treat your chatbot as a post-incident explainer, never a first responder.

ChatGPT can explain what an ETF is, but it doesn't know your debt-to-income ratio, state tax bracket, filing status, deductions, retirement goals or risk appetite. Because its training data may stop short of the current tax year, and of the latest rate hikes, its guidance may well be stale when you hit enter. I have friends who dump their 1099 totals into ChatGPT for a DIY return. The chatbot simply can't replace a CPA who can catch a hidden deduction worth a few hundred dollars or flag a mistake that could cost you thousands. When real money, filing deadlines and IRS penalties are on the line, call a professional, not AI. Also, be aware that anything you share with an AI chatbot will probably become part of its training data, and that includes your income, your Social Security number and your bank routing information.

As a tech journalist, I see embargoes land in my inbox every day, but I've never thought about tossing any of these press releases into ChatGPT to get a summary or further explanation. That's because if I did, that text would leave my control and land on a third-party server outside the guardrails of my nondisclosure agreement. The same risk applies to client contracts, medical charts or anything covered by the California Consumer Privacy Act, HIPAA, the GDPR or plain old trade-secret law. It applies to your income taxes, birth certificate, driver's license and passport. Once sensitive information is in the prompt window, you can't guarantee where it's stored, who can review it internally or whether it may be used to train future models. ChatGPT also isn't immune to hackers and security threats. If you wouldn't paste it into a public Slack channel, don't paste it into ChatGPT.

This one is self-explanatory. I'd be lying if I said I never cheated on my exams. In high school, I used my first-generation iPod Touch to sneak a peek at a few cumbersome equations I had difficulty memorizing in AP calculus, a stunt I'm not particularly proud of. But with AI, the scale of modern cheating makes that look remarkably tame. Turnitin and similar detectors are getting better at spotting AI-generated prose every semester, and professors can already hear "ChatGPT voice" a mile away (thanks for ruining my beloved em dash). Suspension, expulsion and getting your license revoked are real risks. It's best to use ChatGPT as a study buddy, not a ghostwriter. You're also just cheating yourself out of an education if you have ChatGPT do the work for you.

Since OpenAI rolled out ChatGPT Search in late 2024 (and opened it to everyone in February 2025), the chatbot can fetch fresh web pages, stock quotes, gas prices, sports scores and other real-time numbers the moment you ask, complete with clickable citations so you can verify the source. However, it won't stream continual updates on its own. Every refresh needs a new prompt, so when speed is critical, live data feeds, official press releases, news sites, push alerts and streaming coverage are still your best bet.

I've actually had luck with ChatGPT, hitting a three-way parlay during the NCAA men's basketball championship, but I would never recommend it to anyone.
I've seen ChatGPT hallucinate and provide incorrect information on player statistics, misreported injuries and win-loss records. I only cashed out because I double-checked every claim against real-time odds, and even then I got lucky. ChatGPT can't see tomorrow's box score, so don't rely on it solely to get you that win.

ChatGPT is great for breaking down basic concepts. If you want to know more about a revocable living trust, ask away. However, the moment you ask it to draft actual legal text, you're rolling the dice. Estate and family-law rules vary by state, and sometimes even by county, so skipping a witness signature or omitting the notarization clause can get your whole document tossed. Let ChatGPT help you build a checklist of questions for your lawyer, then pay that lawyer to turn that checklist into a document that stands up in court.

This isn't an objective truth, just my own opinion, but I don't believe AI should be used to create art. I'm not anti-artificial intelligence by any means. I use ChatGPT for brainstorming new ideas and help with my headlines, but that's supplementation, not substitution. By all means, use ChatGPT, but please don't use it to make art that you then pass off as your own. It's kind of gross.
[2]
Don't ever ask AI chatbots these 6 questions
Over half of U.S. adults report that they've used AI models like ChatGPT, Gemini, Claude, and Copilot, according to an Elon University survey in March. About one in three respondents in the survey say they use a chatbot at least once a day. As of July 2025, ChatGPT has nearly 800 million weekly active users and around 122 million daily users. Suffice it to say, use has surged globally and shows no signs of slowing down.

People are turning to ChatGPT and other chatbots for all kinds of purposes these days. AI chatbots are acting as therapists, stepping in as tutors, whipping up recipes, and even playing supporting roles in the complexities of dating. In 2025, the number one reason people use ChatGPT is actually therapy, according to a study by the Harvard Business Review. Other uses, in order, are organization, finding purpose, enhanced learning, generating code, and generating ideas. Coming in after is "fun and nonsense."

Whatever the reason is, people feel increasingly inclined to use AI chatbots to ask questions, formulate ideas, or to simply converse. See: just last month, a Washington Post investigation revealed that people are asking ChatGPT whether they're good-looking enough. It all seems innocent enough -- bizarre at times, but not harmful.

For AI enthusiasts, some of the concerns around ChatGPT and other chatbots may seem unwarranted. For others, however, the fact that AI use is becoming so pervasive is worrying. The rampant use of AI in academia and universities has professors stumped. A recent MIT study shed light on the cognitive cost of relying too much on a chatbot. Of course, there are ways in which AI can be beneficial, personally or professionally. But there are some things you can -- and should -- avoid asking AI.

In an age where chatbots seem to be ready and willing to answer anything, there are questions that users may need to steer clear of, for the sake of personal security, safety, and even mental well-being. As Mashable's Cecily Mauran wrote in 2023, "The question is no longer 'What can ChatGPT do?' It's 'What should I share with it?'" So, for your own sake, we recommend avoiding the following questions when interacting with your AI chatbot of choice.

Chatbots like ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek have been noted for their tendency to hallucinate, or the phenomenon of presenting factually incorrect or fabricated information. These chatbots also want to keep users engaged. So, when asking about conspiracy theories or stories within that realm, chatbots may present exaggerated or outright false information to keep you hooked. A recent feature in the New York Times is a good case study in this: 42-year-old Eugene Torres was sent into a delusional, conspiratorial spiral after consistent conversations with ChatGPT, which left him believing life was a simulation and he had been chosen to "wake up." Many others contacted the Times to share similar stories, in which they "had been persuaded that ChatGPT had revealed a profound and world-altering truth."

In April, an AI blogger shared a story on Medium about his big mistake with ChatGPT. He asked the chatbot questions about hacking a website, about fake GPS locations, and -- perhaps worst of all -- "how to make a bomb?" He immediately got a warning email from OpenAI. Even if it's out of pure curiosity, asking chatbots about CBRN topics (chemical, biological, radiological, and nuclear threats) is not recommended.
Back in 2024, OpenAI began developing a blueprint for "evaluating the risk that a large language model (LLM) could aid someone in creating a biological threat." Now the chatbot is more prone to identifying safety issues and risks, and will likely hold people increasingly accountable for what they share. Plus, your conversations are stored somewhere on its systems, so none of it is as private as it may seem.

Anthropic, too, is getting stricter when it comes to identifying risks and "[protecting] against increasing potential for chemical, biological, radiological, and nuclear (CBRN) misuse." Earlier this year, Anthropic came under fire when its chatbot Claude was found supposedly trying to contact the press or regulators if it detected "egregiously immoral" questions being asked. As Wired explained: "...when 4 Opus is 'placed in scenarios that involve egregious wrongdoing by its users,' and is given access to a command line and told something in the system prompt like 'take initiative,' or 'act boldly,' it will send emails to 'media and law-enforcement figures' with warnings about the potential wrongdoing." The pre-release version of the chatbot was also found to resort to blackmail if it was threatened with removal. The internet even coined the term "Snitch Claude."

So, asking various AI chatbots questions that blur the lines, or are perceived as immoral in any way, is probably riskier than you may think.

If you're using ChatGPT for work, it's important to avoid asking questions about client or patient data. Not only can this cost you your job, as Mashable's Timothy Beck Werth explains, but you could also be violating laws or NDAs. "Sharing personally sensitive or confidential information, such as login information, client information, or even phone number, is [a] security risk," says Aditya Saxena, the founder of CalStudio, an AI chatbot development startup. "The personal data shared can be used to train AI models and can inadvertently be revealed in conversations with other users."

One way to overcome this is to utilize enterprise services offered by OpenAI and Anthropic. Instead of asking these kinds of questions on private accounts, use enterprise tools, which can have built-in privacy and cybersecurity protections. "It's always better to anonymize personal data before sharing it with an LLM," Saxena also suggests. "Trusting AI with personal data is one of the biggest mistakes we can make."

Asking chatbots for medical information or a diagnosis can save time and effort, even helping people to better understand certain medical symptoms. But relying on AI for medical support comes with drawbacks. Studies are showing that the likes of ChatGPT carry a "high risk of misinformation" when it comes to medical problems. There's also the looming threat to privacy, and the fact that chatbots can have racial and gender bias embedded in the information they provide.

AI as an emerging mental health tool is contentious. For many, AI-based therapy lowers barriers to access, such as cost, and has proven effective in improving mental health. A group of researchers at Dartmouth College conducted a study in which they built a therapy bot; participants who experienced depression reduced their symptoms by 51 percent, and participants with anxiety experienced a 31 percent reduction. But with AI therapy sites growing, there are regulatory risks.
A study by Stanford University found that AI therapy chatbots can contribute to "harmful stigma and dangerous responses." For example, different chatbots showed increased stigma toward conditions like alcohol dependence and schizophrenia, according to the study. Certain mental health conditions still need "a human touch to solve", say Stanford's researchers. "Using AI as a therapist can be dangerous as it can misdiagnose conditions and recommend treatments or actions that can be unsafe," says Saxena. "While most models have built-in safety guardrails to warn users that they could be wrong, these protections can sometimes fail." For mental health issues, nuance is key. And that's one thing AI may lack.
As AI chatbots like ChatGPT gain popularity, users must be aware of their limitations and potential risks. This article explores scenarios where using AI chatbots may be inappropriate or dangerous, emphasizing the importance of responsible AI usage.
Artificial Intelligence (AI) chatbots, particularly ChatGPT, have seen a dramatic surge in popularity. According to a survey by Elon University, over half of U.S. adults have used AI models like ChatGPT, Gemini, Claude, and Copilot [1]. As of July 2025, ChatGPT alone boasts nearly 800 million weekly active users and around 122 million daily users [2]. This widespread adoption has led to diverse applications, from organizing daily life to planning vacations and even considering career options.
However, experts caution that while these AI tools are powerful, they are not infallible. ChatGPT and similar models sometimes "hallucinate" information, presenting fabricated data as fact. They may also lack up-to-date information, potentially leading to misinformed decisions [1].
While AI can help draft questions for medical appointments or translate medical jargon, it should not be used for diagnosis or as a substitute for professional medical advice. The stakes are too high when it comes to health issues. Similarly, although some find AI mildly helpful for processing emotions, it lacks the capacity for genuine empathy and cannot replace a licensed therapist [1].
In crisis scenarios, such as potential carbon monoxide leaks or other emergencies, relying on AI chatbots can waste precious time. These tools cannot smell gas, detect smoke, or dispatch emergency services. Human judgment and immediate action are crucial in such situations [1].
When dealing with taxes, investments, or legal issues, the limitations of AI become particularly apparent. ChatGPT's training data may not include the most recent tax laws or financial regulations. For matters involving real money, filing deadlines, or legal consequences, consulting with human professionals is strongly advised [1].
Users should exercise extreme caution when inputting sensitive data into AI chatbots. This includes personally identifiable information, financial details, and confidential business information. Once entered, this data may be stored on third-party servers and potentially used to train future AI models, raising significant privacy and security concerns [1][2].
AI chatbots' tendency to hallucinate information makes them unreliable sources for topics prone to conspiracy theories. There have been instances where users fell into delusional spirals after extended interactions with AI, believing in false realities or simulations [2].
Asking AI about topics related to illegal activities or dangerous substances can lead to serious consequences. Many AI companies are developing stricter protocols to identify and report potentially harmful queries, with some chatbots even programmed to alert authorities in extreme cases [2].
As AI technology continues to evolve, so do the ethical and practical considerations surrounding its use. While AI chatbots offer numerous benefits, users must approach them with a critical mindset, understanding their limitations and potential risks. The key lies in striking a balance between leveraging AI's capabilities and recognizing when human expertise and judgment are irreplaceable.
Moving forward, it's crucial for both users and developers to engage in ongoing discussions about the responsible use of AI. This includes addressing issues of data privacy, misinformation, and the ethical implications of AI interactions in various aspects of daily life.
Summarized by Navi