Considering the Possibilities and Pitfalls of GPT-3 in Healthcare Delivery
An early ethical analysis of generative AI in healthcare, identifying bias, privacy, and misinformation risks that remain central to responsible clinical AI use today.
What They Studied
Korngiebel and Mooney published one of the earliest analyses of how GPT-3, a precursor to today’s large language models, might be used in healthcare delivery and what ethical risks that use would introduce. They examined the technology’s potential applications alongside concerns about bias, misinformation, patient privacy, and the broader implications of deploying generative AI in clinical settings.
What They Found
- GPT-3 showed potential for clinical documentation, patient communication, and clinical decision support, but with significant caveats at every level.
- The model reproduced and sometimes amplified biases present in its training data, including biases related to race, gender, and socioeconomic status.
- Privacy risks were substantial: any system that routes patient information through an external AI model creates the potential for protected health information (PHI) exposure (see the de-identification sketch after this list).
- The model could generate confident, plausible-sounding but factually incorrect medical information, with no built-in mechanism to flag uncertainty.
- The authors argued that existing healthcare regulatory frameworks were insufficient to address the unique risks of generative AI.
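The PHI exposure point is concrete enough to illustrate. Below is a minimal, hypothetical sketch of redacting obvious identifiers locally, before any text leaves the clinician’s machine. The patterns, the scrub function, and the example note are illustrative assumptions, not anything from the paper, and regex alone falls far short of HIPAA-grade de-identification (the Safe Harbor method names 18 identifier categories).

```python
import re

# Illustrative patterns only -- real de-identification needs far more
# than regex (HIPAA Safe Harbor alone covers 18 identifier categories).
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b"),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholders before any external call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt MRN: 482910, DOB 04/12/1957, call 206-555-0147 re: dysphagia eval."
print(scrub(note))
# -> Pt [MRN], DOB [DATE], call [PHONE] re: dysphagia eval.
```

The point is the direction of the data flow: redaction happens locally, before the external API call, so the AI vendor never receives raw identifiers regardless of how clear (or unclear) its retention policies are.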
Methodology
This was an analytical perspective piece rather than an empirical study. The authors combined a technical analysis of GPT-3’s architecture and training data with frameworks from bioethics and the emerging AI-governance literature. While not a systematic review, the analysis is grounded in established principles of medical ethics.
What This Means for SLPs
- This paper provides the ethical framework that should guide any SLP’s use of AI tools in clinical practice. The concerns it raised in 2021 remain largely unresolved.
- PHI considerations are directly relevant: SLPs must understand whether the AI tools they use transmit patient data externally, and what happens to that data.
- Bias risks are especially important for SLPs working with culturally and linguistically diverse populations, since AI tools trained primarily on English-language, majority-population data may produce culturally inappropriate recommendations.
- The “confident but wrong” problem means that SLPs who use AI-generated content must maintain the clinical expertise to catch errors rather than defer to AI authority.
- Any institutional policy on AI use in SLP practice should address the ethical dimensions this paper outlines: bias auditing, PHI protection, informed consent, and clinician override (see the review-gate sketch after this list).
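To make “clinician override” concrete, here is a minimal, hypothetical sketch of a review gate in which an AI draft cannot enter the record without explicit clinician sign-off. The DraftNote class, the function names, and the workflow are illustrative assumptions, not a description of any real EHR system or of the paper’s specific proposals.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftNote:
    """An AI-generated draft; not record-ready until a clinician signs off."""
    text: str
    reviewed_by: Optional[str] = None  # stays None until a clinician approves

def clinician_signoff(draft: DraftNote, clinician: str, edited_text: str) -> DraftNote:
    # The clinician reads, edits, and explicitly approves; the raw AI
    # output is never committed on its own authority.
    return DraftNote(text=edited_text, reviewed_by=clinician)

def commit_to_record(draft: DraftNote) -> None:
    # Hard stop: unreviewed AI output cannot enter the patient record.
    if draft.reviewed_by is None:
        raise PermissionError("Unreviewed AI draft; clinician sign-off required.")
    print(f"Committed (signed: {draft.reviewed_by}): {draft.text}")

draft = DraftNote(text="Pt tolerated thin liquids without overt s/s of aspiration.")
approved = clinician_signoff(draft, "J. Rivera, CCC-SLP",
                             draft.text + " Continue current diet.")
commit_to_record(approved)    # succeeds
# commit_to_record(draft)     # would raise PermissionError
```

The design choice worth noting is that the gate is structural rather than advisory: the commit path refuses unreviewed drafts outright, which mirrors the kind of mandatory clinician override the bullet above describes.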
Limitations to Keep in Mind
- The analysis focused on GPT-3, which is now several generations old. Current models have some improved safeguards, though the fundamental concerns persist.
- As a perspective piece, this does not present original empirical data; it is an expert analysis, not a clinical trial.
- The healthcare focus was broad (medicine, not allied health), so SLP-specific ethical considerations around communication disorders and vulnerable populations were not directly addressed.
The Bottom Line
The ethical risks of generative AI in healthcare (bias, privacy violations, and confident misinformation) were clear from the beginning and remain the essential framework for responsible clinical adoption.