Ethical Decision Tree
A structured decision-making guide for the gray areas of AI use in SLP practice.
Not every AI question has a clear answer. But most of them can be worked through systematically. This guide gives you a structured way to think through the gray areas, so you spend less time wondering and more time making defensible decisions.
Use the pre-flight checklist first, then walk through the decision tree for any task you’re unsure about. The gray areas section covers the questions we hear most often.
Before You Start
Before using any AI tool for a clinical task, run through these five questions (a small code sketch of the same check follows the list):
- Do I understand what this tool does with my input? If you don’t know whether your text is stored, used for training, or accessible to the provider, stop and find out.
- Is there a BAA in place between my organization and this provider? If no, treat the tool as public. No PHI, no exceptions.
- Am I using this to replace my clinical judgment, or to support it? If you plan to copy-paste output into a document without reviewing it, you are not using AI responsibly.
- Would I be comfortable explaining this use to a parent, supervisor, or auditor? If the answer is “I’d rather they didn’t know,” reconsider.
- Does my employer or district have a policy on AI use? If yes, that policy supersedes anything here. If no, you’re still bound by HIPAA, FERPA, and ASHA’s Code of Ethics.
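If it helps to keep these questions in one place, here is a minimal sketch that maps each answer to the follow-up described above. The function and parameter names are invented for illustration; the honest answers still have to come from you.

```python
def preflight_flags(
    know_data_handling: bool,      # Do I understand what this tool does with my input?
    baa_in_place: bool,            # Is there a BAA between my organization and this provider?
    supports_my_judgment: bool,    # Am I supporting my clinical judgment rather than replacing it?
    comfortable_disclosing: bool,  # Could I explain this use to a parent, supervisor, or auditor?
    employer_policy_exists: bool,  # Does my employer or district have an AI policy?
) -> list[str]:
    """Illustrative only: returns the follow-ups raised by the five questions above."""
    flags = []
    if not know_data_handling:
        flags.append("Stop: find out whether input is stored, used for training, or provider-accessible.")
    if not baa_in_place:
        flags.append("Treat the tool as public: no PHI, no exceptions.")
    if not supports_my_judgment:
        flags.append("Plan to review every word; copy-pasting unreviewed output is not responsible use.")
    if not comfortable_disclosing:
        flags.append("If you'd rather they didn't know, reconsider the use.")
    if not employer_policy_exists:
        flags.append("No local policy: HIPAA, FERPA, and ASHA's Code of Ethics still apply.")
    return flags
```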
Decision Tree: Can I Use AI for This Task?
Work through these decision points in order; a small code sketch of the same logic follows the tree:
- Does this task involve client or student data?
  - No → Does the task require clinical expertise to evaluate the output?
    - No → Proceed with standard workflow (e.g., brainstorming generic activities, learning about a topic).
    - Yes → Proceed, but review all output against your clinical knowledge before use.
  - Yes → Does the data contain PHI or FERPA-protected information?
    - No (fully de-identified: no names, DOBs, schools, or IDs) → Proceed with de-identified workflow. Review output before placing it in any record.
    - Yes → Is there a signed BAA with this AI provider?
      - No → Do not use AI for this task. De-identify first, then re-enter the tree.
      - Yes → Is the task documentation, goal writing, or report generation?
        - Yes → Proceed with disclosure. Review all output. Document that AI was used if required by your setting.
        - No → Is the task clinical decision-making (diagnosis, eligibility, treatment planning)?
          - Yes → Do not use AI for this task. These decisions require independent professional judgment.
          - No → Proceed with caution. Evaluate whether the task genuinely benefits from AI assistance.
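Here is a minimal sketch of the same branching logic as a single function. The enum values, function name, and parameters are invented for illustration; every answer is still a clinical judgment, and the code only organizes it.

```python
from enum import Enum

class Decision(Enum):
    PROCEED = "proceed with standard workflow"
    PROCEED_WITH_REVIEW = "proceed, but review output against clinical knowledge"
    PROCEED_DEIDENTIFIED = "proceed with de-identified workflow; review before filing"
    PROCEED_WITH_DISCLOSURE = "proceed with disclosure; review and document AI use"
    PROCEED_WITH_CAUTION = "proceed with caution; confirm AI genuinely helps"
    DO_NOT_USE = "do not use AI for this task"

def can_use_ai(
    involves_client_data: bool,
    needs_clinical_expertise_to_evaluate: bool,
    contains_phi_or_ferpa_data: bool = False,
    baa_in_place: bool = False,
    is_documentation_task: bool = False,
    is_clinical_decision: bool = False,
) -> Decision:
    """Mirror of the decision tree above; the answers come from the clinician, not the tool."""
    if not involves_client_data:
        # No client data: the only question is whether output needs expert review.
        if needs_clinical_expertise_to_evaluate:
            return Decision.PROCEED_WITH_REVIEW
        return Decision.PROCEED

    if not contains_phi_or_ferpa_data:
        # Fully de-identified: no names, DOBs, schools, or IDs.
        return Decision.PROCEED_DEIDENTIFIED

    if not baa_in_place:
        # Identifiable data without a BAA: stop, de-identify, then re-enter the tree.
        return Decision.DO_NOT_USE

    if is_documentation_task:
        return Decision.PROCEED_WITH_DISCLOSURE

    if is_clinical_decision:
        # Diagnosis, eligibility, and treatment planning require independent judgment.
        return Decision.DO_NOT_USE

    return Decision.PROCEED_WITH_CAUTION

# Example: drafting a progress note that names a student, with a BAA in place.
print(can_use_ai(
    involves_client_data=True,
    needs_clinical_expertise_to_evaluate=True,
    contains_phi_or_ferpa_data=True,
    baa_in_place=True,
    is_documentation_task=True,
))  # Decision.PROCEED_WITH_DISCLOSURE
```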
Common Gray Areas
“Can I use AI to brainstorm therapy activities?”
Yes. This is one of the safest uses. You are not entering client data; you are asking for ideas. Just make sure the activities you select are clinically appropriate for your client’s goals and abilities.
“Can I paste my raw session notes into ChatGPT?”
Not into a public tool. Raw session notes almost always contain PHI: client names, session dates, specific behaviors tied to an identifiable person. If you want AI help structuring your notes, de-identify first or use a tool with a BAA in place.
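If you take the de-identification route, a find-and-replace pass can handle the mechanical part, but treat anything like the sketch below as a first pass only. The function name and patterns are invented for illustration, automated scrubbing misses indirect identifiers, and the PHI Safety checklist (linked under Pair With) is what actually determines whether a note is de-identified.

```python
import re

def redact_note(note: str, known_identifiers: list[str]) -> str:
    """Naive first-pass redaction sketch; always re-read the result before pasting anywhere."""
    redacted = note
    # Replace names and other identifiers you already know appear in the note.
    for identifier in known_identifiers:
        redacted = re.sub(re.escape(identifier), "[REDACTED]", redacted, flags=re.IGNORECASE)
    # Replace common date formats (e.g., 3/14/2025 or 2025-03-14) with a placeholder.
    redacted = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", redacted)
    redacted = re.sub(r"\b\d{4}-\d{2}-\d{2}\b", "[DATE]", redacted)
    return redacted

# Hypothetical example:
# redact_note("Met with Jordan Lee on 3/14/2025 at Lincoln Elementary.",
#             ["Jordan Lee", "Lincoln Elementary"])
# -> "Met with [REDACTED] on [DATE] at [REDACTED]."
```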
“Do I need to tell parents I used AI to write their child’s IEP goals?”
There is no universal legal requirement yet, but transparency is the safer path. If your district has a disclosure policy, follow it. If not, consider: would a parent feel misled if they learned AI was involved? Disclosure builds trust. Secrecy erodes it.
“Can a grad student use AI to draft their clinical reports?”
This depends on the program’s academic integrity policy and the clinical supervisor’s expectations. If AI is permitted, the student must still demonstrate they understand the clinical reasoning behind every sentence. A draft is a starting point, not a final product.
“My district bought an AI tool. Does that make it safe?”
Not automatically. A purchased tool is only as safe as its implementation. Ask: Is there a BAA? What data does it collect? Where is it stored? Who has access? “District-approved” means someone vetted it, but verify that vetting included privacy review.
“Can I use AI to write a letter of medical necessity?”
You can use AI to help structure and draft the letter, but the clinical rationale must come from you. The letter is a professional assertion that a service is medically necessary, and that assertion carries your name, your license, and your liability.
“Is it ethical to use AI if it means I see more patients?”
Efficiency is not inherently unethical. But if AI-assisted documentation leads to less individualized care, that is a problem. The question is not whether you see more patients, but whether each patient still gets your full clinical attention.
“What if the AI-generated note is better than what I would have written?”
“Better” is doing a lot of work in that sentence. If the note is more polished but less specific to the client, it is not better. If it is more organized and equally accurate, that is a genuine improvement. Your job is to evaluate which one it is, every time.
The Bright Lines
Some things are never acceptable, regardless of context, tool, or time pressure:
- Never enter identifiable PHI into a tool without a BAA. No names, no DOBs, no school names, no student IDs. No exceptions.
- Never let AI make a clinical determination. Diagnosis, eligibility decisions, and treatment recommendations require professional judgment. AI can inform, but it cannot decide.
- Never submit AI output without reviewing it. If your name is on the document, you are responsible for every word in it.
- Never misrepresent AI-generated work as solely your own when disclosure is required by your employer, program, or licensing board.
- Never use AI to fabricate data, invent session details, or document services that did not occur.
Pair With
- PHI Safety, for the de-identification checklist referenced in the decision tree
- Core Principles, for the ethical framework underlying these decisions
- Model & Tool Comparison, for BAA availability by provider