What the Evidence Says

Current evidence (2025–2026) supports LLMs as documentation copilots that improve efficiency and structure, but not as independent authors for clinical or legal documents. Here is an honest summary across disciplines.

Supported by Evidence

  • LLMs improve documentation efficiency and reduce time spent writing notes
  • Outputs tend to be more organized, structured, and readable
  • Strongest results when used as a “second pass” tool: revising, summarizing, restructuring
  • Emerging evidence for detecting biased or stigmatizing language in clinical notes
  • Cross-disciplinary consistency in medicine, occupational therapy (OT), and emerging speech-language pathology (SLP) applications

Not Supported by Evidence

  • Independent note writing without clinician review
  • Diagnostic or eligibility decisions
  • Handling sensitive data in public tools (violates HIPAA/FERPA)
  • Replacing clinical voice with generic polished language
  • Factual accuracy without verification (hallucination risk remains real)

Bottom Line

LLMs perform best as documentation copilots, not independent authors. They are strongest at organizing, refining, and structuring. They are weakest at interpreting, individualizing, and deciding.

A Note on SLP-Specific Evidence

The research base for LLMs in SLP-specific workflows remains limited. Claims in this guide about SLP applications are informed by cross-disciplinary evidence from medicine, nursing, and occupational therapy, combined with clinical judgment. Where evidence is strongest (structured note revision, bias detection) we say so; where it is extrapolated, we flag it. Readers are encouraged to consult current systematic reviews as this field evolves rapidly.

Frequently Asked Questions

Is there evidence that AI improves clinical documentation?

Yes. Van Veen et al. (2024) found that adapted LLMs can outperform medical experts in clinical text summarization. Thirunavukarasu et al. (2023) found that outputs tend to be more organized and readable. The strongest evidence supports LLMs as a “second pass” tool for revising and restructuring existing notes, not for generating notes from scratch.

Can AI write my progress notes for me?

The evidence does not support using AI as an independent note writer. LLMs can organize and structure your raw observations, but every output requires clinician review. The risk of hallucination (the model adding details you didn’t provide) makes unsupervised documentation generation unsafe for clinical or legal records.

Is there research on AI specifically for speech-language pathology?

The SLP-specific evidence base remains limited as of 2026. Most research comes from medicine, nursing, and occupational therapy. This guide extrapolates from cross-disciplinary evidence and clearly flags where that extrapolation occurs. The core finding (that LLMs work best as documentation copilots, not independent authors) is consistent across disciplines.

Does ASHA have a position on using AI in clinical practice?

ASHA has not issued a specific position statement on LLM use as of 2026. However, the ASHA Code of Ethics provides the framework: clinicians must maintain responsibility for clinical decisions, protect client confidentiality, and ensure documentation accuracy, all of which apply directly to AI-assisted workflows.

References

  1. ASHA. (2023). Code of ethics. asha.org/policy/et2016-00342
  2. HIPAA, 45 C.F.R. Parts 160, 164.
  3. FERPA, 20 U.S.C. § 1232g; 34 C.F.R. Part 99 (1974).
  4. IDEA, 20 U.S.C. § 1400 et seq. (2004).
  5. 34 C.F.R. § 300.320, IEP content requirements.
  6. ASHA. (2005). Evidence-based practice in communication disorders. asha.org/policy/ps2005-00221
  7. ASHA. (n.d.). Documentation. Practice Portal. asha.org/practice-portal
  8. 45 C.F.R. § 164.502(e), Business associate contracts.
  9. U.S. Dept. of Education. (2023). AI and the future of teaching and learning. tech.ed.gov/ai-future-of-teaching-and-learning
  10. Thirunavukarasu, A. J., et al. (2023). Large language models in medicine. Nature Medicine, 29, 1930–1940.
  11. Ayers, J. W., et al. (2023). Comparing physician and AI chatbot responses. JAMA Internal Medicine, 183(6), 589–596.
  12. Van Veen, D., et al. (2024). Adapted LLMs can outperform medical experts in clinical text summarization. Nature Medicine, 30, 1134–1142.
  13. Note on emerging evidence: SLP-specific claims are extrapolated from cross-disciplinary research where noted.
  14. ASHA. (n.d.). AAC. Practice Portal. asha.org/practice-portal/clinical-topics/aac
