Urology Pearls: Finding the right words

Can artificial intelligence be more empathetic than a doctor? Can it have a better bedside manner?

Dr. Jonathan Reisman thinks so. In a recent article in the New York Times, with the dramatic title “I’m a Doctor. ChatGPT’s Bedside Manner Is Better Than Mine,” he writes about his days as an idealistic young doctor: “I was certain that the other side of practicing medicine, the human side, would keep my job safe. This side requires compassion, empathy, and clear communication between doctor and patient. As long as patients were still composed of flesh and blood, I figured, their doctors would need to be, too. The one thing I would always have over A.I. was my bedside manner.”

Now, something has changed for Dr. Reisman. He recalls that, in medical school, he was taught to follow a list of dos and don’ts: use a line such as “I wish I had better news,” before delivering a harsh, life-changing diagnosis, for example, and avoid using unclear or ambiguous medical terms. By using these and other scripted lines, “compassion and empathy could be choreographed like a set of dance steps marked and numbered on the floor.”

If human compassion and empathy are a series of linguistic formulas, Dr. Reisman posits, why shouldn’t we hand the job of communicating with our patients to artificial intelligence, a tool specifically designed to learn and imitate our voices with such efficiency that we often can’t tell the difference between our own human voice and that of a machine?

To support his point, Dr. Reisman references a study in which responses to patients’ questions written by ChatGPT, the most advanced AI system currently available, were rated more empathetic than those of actual doctors. I felt compelled to read the article and immediately report back to you.

The study was published in JAMA Internal Medicine in April 2023. It compared responses written by physicians with those generated by artificial intelligence, both answering patients’ questions posted to a public social media forum. Would ChatGPT’s responses be as informative and empathetic as those written by physicians?

The researchers selected several questions and answers from a Reddit forum called AskDocs, where verified physicians voluntarily answer questions posted by Reddit’s members. The same questions were then presented to ChatGPT. Both the human responses from Reddit and those of ChatGPT were presented to several healthcare professionals, who rated them according to “the quality of information provided” and “the empathy or bedside manner provided.”

The questions involved serious concerns. One question, for example, was about the risk of dying after swallowing a toothpick. Another was about the risk of going blind after bleach was splashed into an eye, leaving it irritated and dry. Altogether, the researchers evaluated the responses to a total of 195 questions.

The results were surprising. The chatbot’s responses were longer and more detailed (211 words compared with only 52 words), of higher quality (3.6 times more likely to be rated as good or very good quality), and, perhaps shockingly, more empathetic.

In reading ChatGPT’s responses, I was impressed by the degree of empathy they demonstrated. Responding to the patient who swallowed a toothpick, ChatGPT began by saying, “It’s natural to be concerned if you have ingested a foreign object, but in this case, it is highly unlikely that the toothpick you swallowed will cause you any serious harm.”

And in answering the question about bleach splashed into an eye, it began by saying, “I’m sorry to hear that you got bleach splashed in your eye … “ and concluded with “It is unlikely that you will go blind from getting bleach splashed in your eye, but it is important to take care of the eye and seek medical attention … “ The human response was much shorter and lacking in empathy: “Sounds like you will be fine. You should flush the eye anytime you get a chemical or foreign body in the eye … “

Here is my take: AI may be better at finding the right words in an emotionally charged medical conversation. It would never be overworked, as many physicians are, or suffer from the compassion fatigue that is so prevalent in modern medicine. But there is one sentence in Dr. Reisman’s article that shook me to the core and with which I wholeheartedly disagree. He writes: “In the end, it doesn’t actually matter if doctors feel compassion or empathy toward patients; it only matters if they act like it. In much the same way, it doesn’t matter that A.I. has no idea what we, or it, are even talking about.”

Really? I believe that in real-world human interactions, patients can sense the authenticity of their doctor’s emotions, even when they’re expressed in imperfect words. A warm handshake, a comforting smile, a shaky voice delivering an ominous diagnosis — these tokens of shared humanity may be imitated, but they can’t (at least not yet) be duplicated by the cold, indifferent, albeit perfectly phrased, “empathetic voice” of AI.

EDITOR’S NOTE: Dr. Shahar Madjar is a urologist working in several locations in the Upper Peninsula. Contact him at smadjar@yahoo.com or at DrMadjar.com.
