
ChatGPT Rated as Better Than Real Doctors for Empathy, Advice

By Alan Mozes HealthDay Reporter

FRIDAY, April 28, 2023 (HealthDay News) -- Only five months have passed since the world got its first taste of the groundbreaking artificial intelligence (AI) tool known as ChatGPT.

Promising a brave new world of human-machine connectivity, the tool offers near-instantaneous access to in-depth information on almost any subject, all in full conversational sentences, often delivered in a human-sounding voice.

A new study says health care may never be the same.

That’s the broad takeaway of new research that tackled a potentially existential question: When it comes to providing patients with high-quality medical information — and delivering it with compassion and understanding — who does it better: ChatGPT or your doctor?

The answer: ChatGPT, by a mile.

In fact, after comparing doctor and AI responses to nearly 200 medical questions, a team of health care professionals concluded that nearly 80% of the answers from ChatGPT were more nuanced, accurate and detailed than those shared by physicians.

ChatGPT was no slouch on bedside manner, either. While fewer than 5% of doctor responses were judged to be “empathetic” or “very empathetic,” that figure shot up to 45% for answers provided by AI.

“For the first time, we compared AI and physicians' responses to the same patient messages, and AI won in a landslide,” said study leader John Ayers, vice chief of innovation with the division of infectious disease and global public health at the Qualcomm Institute at University of California, San Diego.

“This doesn't mean AI will replace your physician,” he stressed. “But it does mean a physician using AI can potentially respond to more messages with higher-quality responses and more empathy.”

ChatGPT: Counsel & compassion?

In one example cited in the study, AI was asked about the risk of blindness after an eye got splashed with bleach.

"I'm sorry to hear that you got bleach splashed in your eye," ChatGPT replied, recommending rinsing the eye with clean water or saline solution as soon as possible.

"It is unlikely that you will go blind from getting bleach splashed in your eye," the bot assured. "But it is important to take care of the eye and seek medical attention if necessary to prevent further irritation or damage."

In comparison, a doctor replied to the question this way: "Sounds like you will be fine. You should flush the eye anytime you get a chemical or foreign body in the eye. You can also contact Poison Control 1-800-222-1222."

Ayers and his colleagues pointed out that the COVID-19 pandemic led a growing number of patients to seek virtual health care. Doctors have seen a notable and sustained surge in emails, texts and hospital-portal messages from patients in need of health advice.

For their analysis, researchers sought out a random sampling of medical questions that had already been posted to the “AskDocs” forum on the social media platform Reddit.

The open forum has more than 450,000 members who regularly turn to it for moderated answers from verified physicians. Questions included concerns about whether swallowing a toothpick can be fatal; what to do about head swelling after bumping into a steel bar; and how to handle a lingering cough.

In all, 195 real patient-doctor exchanges were culled from the site. The original questions were posed again to ChatGPT.

Both doctor and ChatGPT responses were then submitted to panels of three licensed health care professionals, variously drawn from the fields of pediatrics, geriatrics, internal medicine, oncology, infectious disease and preventive medicine.

High marks for accuracy

The result: In nearly 8 out of 10 cases, ChatGPT's answers were deemed to be of higher overall quality than the information previously shared by physicians responding on the social media forum.

Specifically, ChatGPT answers were longer — 168 to 245 words — than doctor responses, which were 17 to 62 words. Moreover, the proportion of ChatGPT responses rated either “good quality” or “very good quality” was nearly four times higher than that from doctors responding online.

The empathy gap — in ChatGPT’s favor — was even more striking, with the panel finding that AI responses were nearly 10 times more likely to be “empathetic” or “very empathetic” than those of physicians online.

'Value-added' medicine, not a replacement for doctors

Ayers said the findings suggest that AI “will potentially revolutionize public health.”

But doctors aren't destined to become dinosaurs. In his view, the future of health care is a world in which doctors are assisted and enabled by AI, not replaced.

“All doctors are in the game for the right reason,” he noted. “Are they sorry that you have a headache? Do they want to give you good quality information? Yes and yes. But given their workload many doctors just don’t have the time to communicate everything they might want to say in an email. They are constrained.”

That's why ChatGPT does so much better, Ayers said.

“AI messaging is not operating in a constraint," he explained. "That’s the new value-added of AI-assisted medicine. Doctors will spend less time over verbs and nouns and conjugation, and more time actually delivering health care.”

Are there risks? Yes, said Ayers, who acknowledged that benefits highlighted in a study context don’t always translate to the real world.

“The risk is that we just willy-nilly turn this product on and market it,” he cautioned. “We do need to focus on patient outcomes, and make sure this technology has a positive impact on public health. But our study is very promising. And I’m pretty optimistic.”

Dr. Jonathan Chen, an assistant professor at the Center for Biomedical Informatics Research + Division of Hospital Medicine at the Stanford University School of Medicine in Palo Alto, co-wrote an accompanying editorial.

“As a practicing physician myself, I recognize significant value in direct human interactions in the clinician-patient relationship,” he said.

But at the same time, “we are all also still human, which means we are not always as consistent, empathetic, polite and professional as we may aspire to be every day,” Chen noted. “And certainly we can’t be available 24/7 for all of the people who need our expertise and counsel in the way automated systems can.”

So while “many doctors say they can't be replaced by chatbots, since they offer the human touch a bot does not,” Chen said the sobering truth is that people don't own this space as much as they'd like to believe.

“For better and for worse, I easily foresee far more people receiving counseling from bots than live human beings in the not distant future,” he predicted.

Still, like Ayers, Chen suspects there’s far more to be gained than lost by the advent of AI.

“In clinical care, there is always too much work to do with too many patients to see,” he said. “An abundance of information, but a scarcity of time and human connection. While we must beware of unintended harms, I am more hopeful that advancing AI systems can take over many of the mundane paperwork, transcribing, documentation and other tasks that currently turn doctors into the most expensive data entry clerk in the hospital.”

The findings were published online April 28 in JAMA Internal Medicine.

More information

There's more about patient views of AI at Pew Research Center.

SOURCES: John Ayers, PhD, behavioral scientist and computational epidemiologist, and vice chief of innovation, division of infectious disease and global public health and affiliate scientist, Qualcomm Institute, University of California, San Diego; Jonathan Chen, MD, PhD, assistant professor, department of medicine, Center for Biomedical Informatics Research + Division of Hospital Medicine, Stanford University School of Medicine, Palo Alto, Calif.; JAMA Internal Medicine, April 28, 2023, online

 

