Antipatterns
Concrete before-and-after examples of bad LLM use in clinical practice. Each antipattern shows what went wrong, why the model produced it, and exactly how to fix it.
The Polished but Unmeasurable Goal
When AI-generated goals sound professional but fail the most basic test: can you actually measure this?
Generic AI Voice
When LLM output replaces your clinical observations with polished but interchangeable language that could describe any client.
The Clinic-Only Goal
When a goal measures performance in the therapy room but says nothing about whether the skill carries over to the real-world settings where it actually matters.
Copy-Paste PHI
When a clinician pastes real patient or student names, dates of birth, and diagnoses directly into a public LLM.
Hallucinated Test Scores
When the LLM invents standardized test scores, percentile ranks, or normative data that you never provided.
The Overcorrection
When repeated AI-assisted revisions strip a note of its original clinical observations until it is technically polished but clinically empty.
One-Size-Fits-All Goals
When the same prompt produces nearly identical goals for every student, differing only in the name at the top.
The Scope Creep
When the LLM generates recommendations outside the SLP's scope of practice, including medication suggestions, educational placement decisions, or medical diagnoses.