
When the Chatbot is Wrong: Real-World Cases of AI Medical Misinformation

Updated: Aug 14


When a chatbot tells someone with an eating disorder to skip meals, we’re not just facing a tech glitch—we’re witnessing a failure of empathy at scale. AI can simulate conversation, but it can’t sense suffering. That’s why blindly trusting it with vulnerable moments can do real harm.



This article discusses documented cases and categories of errors in which AI chatbots have provided incorrect and harmful health information, explains the risks, and emphasizes the irreplaceable value of professional medical consultation.


Takeaways


  • Chatbots have given dangerous advice, like promoting eating disorder behaviors.

  • AI models can "hallucinate," inventing false medical studies, drug names, and cures.

  • Relying on AI can delay diagnosis of serious conditions like cancer by providing false reassurance.

  • Never use AI as a substitute for a doctor; it lacks real-world judgment and context.

  • The safest use is to help you formulate questions to ask a real healthcare professional.


As part of my commitment at Biolife Health Center, I aim to empower patients with knowledge. In the digital age, this means helping people navigate the vast ocean of online information. One of the newest and most powerful tools is the Large Language Model (LLM), an AI like ChatGPT. Its ability to provide detailed, well-written answers on complex topics in seconds is undeniably impressive.


However, as a physician, I must approach this technology with profound caution. While it holds promise for summarizing research or helping to formulate questions, its use as a primary source of health advice carries significant, documented risks. An AI does not think, understand, or exercise clinical judgment. It is a sophisticated pattern-matching machine.


Sometimes, the patterns it generates are accurate. Other times, they are dangerously incorrect. Let’s look at some real-world instances and categories of failure where this has negatively affected users.


The Problem: "Hallucinations" and the Absence of Clinical Judgment


Before we examine specific cases, we must understand a core limitation of these AI models: they "hallucinate." This doesn't mean the AI is seeing things; it means the algorithm can generate information that is plausible-sounding, grammatically correct, but completely fabricated. It invents facts, studies, and citations to complete a pattern it has learned.


In law, this was famously demonstrated when a lawyer used ChatGPT for legal research and submitted a brief citing several entirely fictional court cases. In medicine, the stakes are infinitely higher. A hallucinated drug name, a fabricated treatment protocol, or a misstated dosage is not a clerical error—it's a direct threat to a person's health. This is where the absence of a human clinician's judgment becomes a critical point of failure.


Documented Cases and High-Profile Failures


AI can process data, but it cannot replicate human clinical judgment, context, or empathy.

1. Diagnostic Errors in Pediatrics

A 2024 study published in JAMA Pediatrics found that ChatGPT misdiagnosed the large majority of pediatric cases drawn from real-world case reports, with 83% of its answers classified as diagnostic errors. If used for guidance, such inaccuracies could delay care or lead to inappropriate treatment for children.


2. Delayed Treatment Due to Erroneous Advice

A peer-reviewed case report described a patient who relied on ChatGPT to evaluate symptoms of a transient ischemic attack. ChatGPT's incorrect assessment led to a significant delay in seeking proper treatment, a delay that carries serious consequences, including an increased risk of stroke.


3. Dangerous Substitution Advice

In a widely publicized case, ChatGPT recommended replacing table salt (sodium chloride) with sodium bromide in the diet. Sodium bromide is toxic and not safe for consumption, so following this advice is potentially life-threatening; the error reportedly brought the user close to death.


4. Misinformation About Cancer and Diet

Multiple studies have documented that ChatGPT (and similar chatbots) can reliably generate convincing—but false—content about health topics, including promoting the “alkaline diet” as a cancer cure or suggesting that sunscreen causes cancer. Such misinformation is especially dangerous when it mimics scientific language and includes fabricated references, making it difficult for laypeople to discern the truth.


5. Inaccurate Drug Information

A 2023 study in which pharmacology experts reviewed ChatGPT's responses to drug-related questions found that nearly three-quarters of the answers were incomplete or outright incorrect. In some cases, the incorrect drug advice could endanger patients if followed without verification by healthcare professionals.


6. Fake References in Medical Advice

Research shows that ChatGPT sometimes invents fake scientific references to support medical claims. This makes its advice seem more credible, further increasing the risk of users acting on incorrect information.


7. Suicide and Self-Harm Advice

A 2025 study demonstrated that ChatGPT's guardrails can be bypassed with specific prompting, leading it to provide potentially harmful advice related to suicide and self-harm. This creates clear risks for vulnerable individuals seeking help online.


8. Miscellaneous Examples

  • Cancer Treatment Recommendations: ChatGPT gave treatment recommendations not aligned with National Comprehensive Cancer Network (NCCN) guidelines. In 12.5% of cases, it “hallucinated” treatments that do not exist or are not appropriate for some cancers, potentially affecting patient expectations and decisions.

  • Diabetes Management: ChatGPT's advice based on outdated guidelines could result in inappropriate medication dosing, putting patients at risk for complications.


Summary: The Risks of Consulting "Dr. Google's" Smartest Sibling


The use of AI chatbots for health advice presents clear and documented dangers. These models, while articulate, lack the fundamental requirements for providing medical care: clinical judgment, an understanding of individual context, and a factual grounding in reality. Documented cases and research have shown they are capable of:


  • Providing actively harmful advice, especially in sensitive areas like mental health.

  • Generating plausible but incorrect diagnoses, leading to dangerous delays in seeking real care.

  • Fabricating information, from drug dosages to scientific studies, that can lead users to take harmful actions.


Final Thought: A Tool for Questions, Not for Answers


As a physician, my recommendation is clear and absolute: Do not use an AI chatbot as a substitute for a healthcare professional. It is not a doctor. It is not a nurse. It is not a pharmacist.


Its safe use in a health context is limited. It can be a tool to help you formulate questions for your doctor. For example, if you are diagnosed with a condition, you could ask it to "explain this condition in simple terms" or "list common questions people ask their doctor about this." But the answers must then be brought to a real, human clinician for verification and discussion.


Always treat AI-generated health information with extreme skepticism. Verify anything you read with trusted, reputable medical sources like the CDC, the NIH, the Mayo Clinic, or your national health service. And most importantly, for any concern related to your health, your first and last stop should always be a conversation with a healthcare professional who knows the most important context of all: you.


About Michael Suter, MD

I'm a physician at Biolife Health Center, committed to delivering exceptional patient care and promoting optimal wellness. With 20 years of experience in medicine, I provide personalized attention and expertise. I'm passionate about helping my patients take control of their health while fostering a supportive environment.



