Informed Consent
The ethical obligation to inform clients and families when AI tools are used in any part of their care, including documentation. Most clients assume their SLP writes their reports. If an LLM drafts your eval summary or generates goal suggestions, your client has a right to know. ASHA's Code of Ethics requires honesty and transparency; informed consent for AI use is an extension of that obligation.
Transparency requirements around AI use in clinical settings. As AI tools become embedded in healthcare documentation workflows, regulatory and professional bodies increasingly require disclosure of AI involvement. This includes informing clients about what data is entered into AI systems, how that data is processed, and whether it is stored or used for model training.
Why SLPs Need to Know This
Most AI tools currently in clinical use exist in a regulatory gray area. ASHA has issued guidance but not binding rules. HIPAA addresses data privacy but not AI-specific disclosure. That means the ethical burden falls on you. If you paste session notes into ChatGPT to draft a progress report, your client’s protected health information may be processed by a system with no Business Associate Agreement (BAA) in place, and your client almost certainly does not know.
Practical Guide
- Disclose AI use in your intake paperwork. A simple statement that AI tools may be used for documentation tasks is sufficient
- Specify what data enters the AI system. Clients should know if their name, diagnosis, or session details are being processed
- Clarify what the AI does and does not do. “AI helps organize my notes; all clinical decisions are mine”
- Know your tool’s data policy. Does the platform store inputs? Use them for training? Operate under a BAA?
- Document the disclosure. Treat AI consent like any other informed consent in your records
Clinical Impact
- Failure to disclose AI use could constitute an ethics violation under ASHA’s Code of Ethics Principle I, Rule A (honesty in professional relationships)
- Parents of children with IEPs may have strong feelings about AI involvement in their child’s documentation
- Clients from marginalized communities may have legitimate concerns about their data being used to train AI systems
- Transparent disclosure builds trust. Hidden AI use, when discovered, destroys it
Related Terms
- Bias: informed consent should include awareness that AI tools may introduce biased language or framing into clinical documents
- Interoperability: when AI connects to other systems, the data exposure surface grows, making disclosure more important