Topic: Public Health and Patient Safety


Studies Find ChatGPT Often Misguides Patients on Urgent Medical Care
An NPR report on March 11, 2026 details new research warning that popular AI chatbots like ChatGPT can mislead people seeking medical advice, especially about how urgently they need care. In a Nature Medicine study designed to mimic how laypeople actually use AI, participants who consulted chatbots correctly identified a hypothetical condition only about a third of the time, and only 43% chose the right next step, such as going to the ER or staying home.

A separate study found that in 52% of emergency scenarios, chatbots under‑triaged the problem, including a case of diabetic ketoacidosis with impending respiratory failure in which the bot failed to send the patient to an emergency department. Researchers say small differences in wording—such as whether a headache is described as "the worst ever"—can change advice from "go to the ER now" to "take aspirin and stay home," underscoring how non‑experts may not know which symptoms to highlight.

OpenAI disputes that the studies reflect typical ChatGPT use and notes they relied on older model versions. But physicians and AI researchers interviewed by NPR argue that while these tools can help explain conditions or prepare for doctor visits, they should not be treated as a substitute for professional triage, particularly in time‑sensitive emergencies.
Related Topics: AI in Health Care, Public Health and Patient Safety, Technology Regulation and Ethics