
A new study from researchers at the Qualcomm Institute at the University of California San Diego (UCSD) has found that ChatGPT's written responses to patient questions were preferred over those of real-world physicians for both quality and empathy. When a panel of licensed healthcare professionals was asked to compare the AI engine's written answers with those from physicians, the ChatGPT responses were preferred 79% of the time.

“The opportunities for improving healthcare with AI are massive,” said John W. Ayers, PhD, vice chief of innovation in the UCSD School of Medicine Division of Infectious Disease and Global Public Health. “AI-augmented care is the future of medicine.”

The study set out to answer a very basic question about the potential benefits of using AI technology in the clinical environment: “Can ChatGPT respond accurately to questions patients send to their doctors?” The intent was to see whether the artificial intelligence technology could bridge the gap between simply finding and reporting health information and the soft skills of interpretation needed for care in a clinical setting.

“ChatGPT might be able to pass a medical licensing exam,” said study co-author Davey Smith, MD, PhD, co-director of the UCSD Altman Clinical and Translational Research Institute and professor at the UCSD School of Medicine, “but directly answering patient questions accurately and empathetically is a different ballgame.”

In this research, the investigators turned to the social media platform Reddit to gather the questions they would pose to both ChatGPT and physicians. The questions came from the subreddit AskDocs, a moderated forum where moderators verify the credentials of the medical professionals who answer posted questions. AskDocs currently has more than 450,000 registered members.

While it is fair to be skeptical of whether question-and-answer exchanges on social media constitute a reasonable test, team members noted that the exchanges were reflective of their clinical experience.

From the questions on AskDocs, the investigators randomly chose 195 exchanges in which a verified physician had responded to a public question. The team provided each original question to ChatGPT and asked it to author a response. Three licensed healthcare professionals, blinded to the source of the responses, assessed each question and the corresponding answers, comparing them on information quality and empathy and noting which they preferred. The ChatGPT responses were preferred over the physician responses 79% of the time.
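The paper does not detail the mechanics of how the questions were submitted to ChatGPT, but the general workflow of posing a patient question to a chat model and collecting its drafted answer can be illustrated with a short sketch. The snippet below is a hypothetical illustration using the OpenAI Python client; the model name, sample question, and client usage are assumptions for demonstration, not details from the study.

```python
# Hypothetical sketch: submit a patient question to a chat model and collect its answer.
# The model name, sample question, and client setup are illustrative assumptions;
# the study itself does not document this workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_response(patient_question: str) -> str:
    """Ask the model to draft an answer to a patient question, as in a portal message."""
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice for illustration
        messages=[{"role": "user", "content": patient_question}],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    question = "I twisted my ankle yesterday and it is still swollen. Should I see a doctor?"
    print(draft_response(question))
```

In a study-like setup, the drafted answer would then be paired with the original physician reply and shown, unlabeled, to evaluators for comparison.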

“I never imagined saying this,” said Aaron Goodman, MD, associate clinical professor at UCSD School of Medicine and study coauthor, “but ChatGPT is a prescription I’d like to give to my inbox. The tool will transform the way I support my patients.”

However, as with many AI tools employed in shaping patient care and clinical decision making, the researchers are clear that tools such as ChatGPT are not in a position to replace doctors; rather, they should serve as a resource to be incorporated into creating a treatment regimen for patients, and evidence on their use needs rigorous study.

“It is important that integrating AI assistants into healthcare messaging be done in the context of a randomized controlled trial to judge how the use of AI assistants impacts outcomes for both physicians and patients,” said study co-author Mike Hogarth, MD, co-director of the Altman Clinical and Translational Research Institute at UCSD.

Added Mark Dredze, PhD, an associate professor of Computer Science at Johns Hopkins and study co-author: “We could use these technologies to train doctors in patient-centered communication, eliminate health disparities suffered by minority populations who often seek healthcare via messaging, build new medical safety systems, and assist doctors by delivering higher quality and more efficient care.”
