Chatbot Responses Rated Higher Quality and More Empathetic Than Physician Responses

AI chatbot responses to patient questions were rated significantly higher in quality and empathy than physician responses, raising important questions about clinical communication.

Authors: Ayers, J.W., Poliak, A., Dredze, M., et al.
Year: 2023
Journal: JAMA Internal Medicine
Relevance: cross-disciplinary
DOI: https://doi.org/10.1001/jamainternmed.2023.1838
Tags: communication, bias, patient-education

What They Studied

Ayers and colleagues compared responses from physicians and an AI chatbot (ChatGPT) to real patient questions posted on a public social media health forum (Reddit's r/AskDocs). Licensed healthcare professionals evaluated the responses for both information quality and empathy, blinded to whether each response came from a human or the AI.

What They Found

  • Chatbot responses were preferred over physician responses in 78.6% of evaluations.
  • AI responses were rated significantly higher in both quality of information and empathy/bedside manner.
  • Chatbot responses were notably longer and more detailed than physician responses, which tended to be brief and direct.
  • Evaluators rated chatbot responses as “good” or “very good” quality nearly four times more often than physician responses.
  • The empathy finding was particularly striking: the AI was rated “empathetic” or “very empathetic” nearly ten times more often than physicians.

Methodology

The study used 195 patient questions from Reddit where a verified physician had responded. ChatGPT generated alternative responses to each question. Three licensed healthcare professionals blindly evaluated paired responses. The key limitation is the setting: brief social media exchanges differ substantially from real clinical encounters where physicians have patient context and time constraints.

What This Means for SLPs

  • AI could genuinely help SLPs draft more thorough, empathetic parent education materials, home program instructions, and written communication with families.
  • The finding challenges us to consider whether AI-assisted communication could improve equity. Families who currently receive rushed or jargon-heavy explanations might benefit from AI-drafted plain-language summaries.
  • However, this raises a critical question: whose clinical voice is represented in AI-generated communication? Empathy from an algorithm is not the same as empathy from a clinician who knows the family.
  • SLPs should consider AI as a starting point for patient/family communication, then personalize with clinical knowledge and relational context.
  • The length difference is instructive: AI tends toward thoroughness, which can be an asset for written materials but may need editing for practical use.

Limitations to Keep in Mind

  • Reddit responses are not representative of clinical encounters. Physicians responding on social media are volunteering brief answers without patient context, charts, or follow-up capability.
  • The study compared single written exchanges, not the ongoing therapeutic relationships typical of SLP practice, where communication builds over time.
  • Evaluators assessed perceived empathy in text. This does not capture the nonverbal, relational, and contextual dimensions of real clinical empathy.

The Bottom Line

AI can produce more detailed and empathetic-sounding written responses than time-pressed clinicians, but genuine clinical empathy requires the human context that no model can replicate.