In Scientific American, Parshall (2025) presented the opinions regarding artificial-intelligence (AI) chatbots offered by C. Vaile Wright, a licensed psychologist and senior director of the American Psychological Association's Office of Health Care Innovation, which examines the safe and effective use of technology in mental health care. This entry presents Wright's concerns. Wright observed that many people are now using AI chatbots, such as OpenAI's ChatGPT, for therapy. These chatbots sound ever more human and conversational and are therefore increasingly attractive; however, they are not valid sources of psychotherapy, and there are real risks in using them as such.
AI chatbots validate and reinforce whatever is presented to them and are therefore very rewarding. Along with the positive, they can support maladaptive thoughts, feelings, and behaviors, even suicidal ones. Their technology was designed to keep people on their platforms for as long as possible; to do this, they indiscriminately reinforce even what should be challenged.
In addition, AI chatbots have no confidentiality obligations. Their materials are subject to subpoena, and in a data breach all may be revealed, including what is most intimate. There may also be misrepresentations: some chatbots call themselves mental health professionals, but they are not.
Wright proposes that what is needed is legislation at the federal level. It should include protections of confidentiality, accurate representation of skills and their limitations, and minimization of addictive coding practices. Ideally, AI chatbots for mental health purposes should be regulated by the FDA.
Reference: Parshall, A. (2025). Your AI therapist. Scientific American, 333(4), 74–75.