'ChatGPT saved my life.' How patients, and doctors, are using AI to make a diagnosis

Digital doctor on a free flat design background. Online medical Q&A concept.
LangPhoto/iStockphoto / Getty Images

Start reading recent internet conversations about AI, and you'll find an anecdote that surfaces with increasing frequency: ChatGPT delivered lifesaving medical advice.

"Three weeks ago I woke up from a nap and found some red spots all over my legs," begins one such account in a video from Bethany Crystal, who runs a consulting business and lives in New York. After an exchange with ChatGPT, she recounts it telling her, "You need immediate evaluation for possible bleeding risk."

"What ensued was a harrowing three day experience that got increasingly scary," says Crystal, who was eventually diagnosed with a rare autoimmune disorder called immune thrombocytopenic purpura that can lead to low platelets and increased bleeding. She says she may not have gone to the emergency room in time if ChatGPT had not been insistent.

Hundreds of millions of people now consult ChatGPT weekly for wellness advice, according to its maker, OpenAI. In early January, the company announced the launch of a new platform, ChatGPT Health, which it says offers enhanced security for sharing medical records and data. It joins other AI tools such as My Doctor Friend in promising to partner with patients on navigating health care.

Doctors and patients say AI is already having a profound impact on both the way that patients receive information about their health and practitioners' ability to diagnose and communicate with their patients.

Unlimited time to engage

There's a saying in medicine: "If you hear hoofbeats, think of horses, not zebras." In other words, the most obvious problem is usually the problem. For time-crunched doctors, this is often the default approach to making a diagnosis.

"I've heard from a number of patients who said, 'Well, guess what? I'm a zebra,'" says Dave deBronkart, a cancer survivor who writes about patients using AI to help with medicine.

Unlike doctors, ChatGPT has nearly unlimited time to engage in exhaustive inquiry with patients. deBronkart says he often hears stories about AI identifying symptoms that differentiate unusual or rare conditions from more common ailments.

Moreover, he points out, AI's diagnostic catalogues go beyond generalized medical knowledge. "Turns out my doctors are really good at horses," says deBronkart. "They just don't know all the special stuff."

A new kind of patient

Burt Rosen uses AI to help manage symptoms and treatment for the two different kinds of cancer he's been diagnosed with.
Burt Rosen

Many patients recount using different AI platforms to help with daily well-being and management of chronic conditions as a complement to oversight from medical professionals.

Sixty-year-old Burt Rosen – who works in marketing for a local Oregon college – uses AI to help manage symptoms and treatment for the two different kinds of cancer he's been diagnosed with, renal clear cell carcinoma and a pancreatic neuroendocrine tumor.

"I'm in the, 'I went to the cancer store on the buy one, get one free day,'" he jokes.

Recently, says Rosen, he told the AI he was experiencing migraines and nausea after sleeping. It asked him what position he was sleeping in and suggested he use two pillows instead of one. Pressure can build when lying flat, it explained, and cause migraines.

His headaches disappeared.

Rosen also uses it to track his symptoms over time in order to find correlations with diet or other triggers, or to understand the range of treatment options. He frequently shows it test results and asks it to translate them into comprehensible English.

A favorite trick, says Rosen, is asking AI to write in the voice of Jerry Seinfeld — something that is amusing but also makes information about his disease more memorable. "I mean, one cancer is bad enough!" reads a recent Seinfeld translation. "But two, what's the deal with that?"

Rosen says AI has changed the relationship he has with his oncologist.

" When I go into a doctor's appointment, I'm no longer going in to have him explain to me my scans or my conditions," he says. "My doctor's appointment is much more of an action-planning session."

The risks of trusting AI

The list of unanswered questions and potential hazards of using AI in medicine is long.

As a consumer product, ChatGPT Health is not regulated by health privacy laws the way a medical provider's systems are in a clinical setting.

When it comes to mental health, OpenAI is currently named in multiple active lawsuits alleging psychological harm, including suicide-related claims.

Patients and doctors stress that AI is not a replacement for a doctor, and that considering it as such is dangerous. Doctors say that without clinical oversight, misdiagnosis, misleading advice, or human misunderstanding are significant problems.

Dr. Robert Wachter, chair of the Department of Medicine at the University of California, San Francisco — author of the forthcoming book A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future — says he's seen the risks firsthand. Wachter recounts a recent case of AI advising a patient to try the anti-parasitic drug ivermectin as a treatment for testicular cancer.

" It probably wouldn't hurt you, but what would hurt you is you not getting appropriate treatment for your cancer that is treatable," he says. "So, the capacity for badness here is pretty high."

In one documented case, a 60-year-old man consumed sodium bromide and experienced paranoia and hallucinations after consulting ChatGPT about decreasing his salt intake.

Despite these hazards, Wachter is optimistic about the contributions AI can make to health care and believes the benefits will eventually outweigh the dangers, if they don't already. "I actually think it's going to be a really good thing," he says.

Studies show that large language models are competitive with humans in simulated tests of diagnostic reasoning. A study published in the New England Journal of Medicine found that AI systems could frequently identify difficult cases; a follow-up comparison with a leading human diagnostician showed a slight human advantage. Still, says Wachter, "the AI's performance was remarkable."

Wachter says AI has already significantly improved his own work and that of his colleagues. He now uses a tool called AI Scribe that allows him to look his patients in the eye while they talk. "Two years ago I would've been sitting there pecking away on my computer."

In a matter of months, he says, he's also seen widespread adoption among his colleagues of a tool called OpenEvidence – "kind of a ChatGPT for doctors," which gives them exhaustive knowledge at their fingertips.

"I use it all the time," he says. "We all do."

The future of health care

Patients and doctors who are using AI in health care say that the rate at which it is becoming integrated into the system is staggering. "AI is already a core part of my care team," says Rosen.

At 60, Rosen acknowledges he's unusually technologically literate. The next generation of patients and doctors, he observes, will not have the same learning curve. "Two generations from now," he says, "no one will give it a second thought."

Medicine and health care in the United States are unique, says Wachter, in that the system is so deeply flawed — and in need of so much help.

"If you ask me, what do you think about AI in general, I'm worried," he says. "I'm worried about what it does to our politics, deep fakes, jobs — all those things are very real," he says. "It's just in the corner of the world that I work in, I just see a system that is  falling apart and can't possibly meet the needs of people without this kind of help."

Copyright 2026 NPR

Katia Riddle