Friend, Therapist, or Threat? Exploring the Ethical and Clinical Implications of Large Language Models in Mental Health Practice
The advent of large language models (LLMs) such as ChatGPT has opened new possibilities for mental health practice while raising significant ethical concerns. This study explores the clinical and ethical implications of integrating LLMs into mental health treatment through qualitative documentary analysis and thematic synthesis of peer-reviewed literature, ethical codes, policy reports, and professional guidelines published between 2018 and 2025. The research aimed to (1) outline current and potential uses of LLMs by clinicians and clients, (2) evaluate ethical concerns around privacy, consent, disinformation, and therapeutic boundaries, and (3) derive pragmatic recommendations for ethical adoption. Dominant themes included client-led digital companionship, clinicians' administrative use, consent ambiguity, erosion of therapeutic boundaries, misinformation risks, regulatory gaps, and integration frameworks. The findings suggest that LLMs are already changing how individuals access and receive mental health care, often beyond professional monitoring or visible regulatory oversight. Clinicians use these technologies cautiously for non-clinical purposes, while ethical risks such as hallucinated responses and data misuse remain largely unmitigated.

Master’s Thesis Title: Friend, Therapist, or Threat? Exploring the Ethical and Clinical Implications of Large Language Models in Mental Health Practice
Submitted by: Jamieson Cobbs (EIU6027075)
Program: Master of Science in Mental Health Psychology
European International University - Paris (EIU-PARIS)
Date: August 2025
