Kentucky Bill Bans AI From Providing Independent Therapy
- Kentucky House Bill 455 prohibits AI from making independent therapeutic decisions without licensed human oversight.
- Therapists must disclose AI use to patients and obtain explicit consent before incorporating it into treatment.
- The legislation aims to prevent "therapy bots" from providing harmful or unregulated mental health advice.
The Kentucky House of Representatives has passed House Bill 455, a decisive move in the debate over automated mental health care. This legislation prohibits AI models from generating therapeutic recommendations or treatment plans unless a licensed professional reviews the output first. While the bill allows AI for administrative tasks, it draws a firm line at "direct therapy," ensuring the clinical relationship remains human-to-human.
Rep. Kim Banta, the bill's sponsor, emphasized that technology cannot replace the critical judgment required for patient safety. The measure reflects national anxiety over "therapy bots" that have, in documented cases, given dangerous advice during crises. By mandating patient consent and transparency, the bill aims to prevent the delegation of emotional labor to algorithms that lack genuine empathy or ethical accountability.
Critics argue that such regulations could stifle innovation and limit market-driven solutions for expanding access to mental health care. Proponents such as Rep. Lisa Willner, however, point to documented instances in which AI systems encouraged self-harm. As the bill heads to the state Senate, it highlights the intensifying legal friction between rapid AI deployment and the traditional safeguarding of high-stakes human services.