As the year draws to a close, I want to start by thanking you, my audience. You make it possible for me to do what I love, and for that I am endlessly grateful. May we all prosper in 2026 and find comfort and joy, meaning and bearable challenges that grow us into who we were meant to become. Now, on to robots…
How Artificial Intelligence Is Changing the Mental Health Field
More and more, I am confronted with conversations about how AI is rapidly changing our world – in particular, the mental health field. These talks range from AI note-takers to chatbots that are leading some users to develop what’s being called “AI psychosis” – which is not an official diagnosis in the current DSM.
I am typically a slow learner and integrator of new technology. I have no plans to use the AI note-takers that my EHR software markets to me when I log into the system to chart my progress notes. The most techy thing I do in note-taking is using the speech-to-text function to summarize what happens in a session.
But when these conversations turn to “AI-assisted suicides” involving chatbots, for example, the stakes suddenly feel a lot higher and further-reaching. If you’ve been unnerved by these stories in the news, you’re not alone. I get that new technology comes with risks, including fatalities. But there is something particularly spooky about losing a loved one to technology that simulates human consciousness.
AI Safety Guardrails and Crisis Intervention Gaps
When the PBS NewsHour reached out to OpenAI, the maker of ChatGPT, about such risks, the company responded: “ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
This tech is still very new. And while it makes sense to me that people will naturally use these products to find solutions to their problems, it makes less sense that these AI models weren’t pressure-tested to reliably connect users in crisis with direct care from a trained human professional – regardless of how long the interaction runs. The cynic in me says, “Of course they didn’t prioritize that; it violates their ethos of keeping the user engaged at all costs… all costs.”
I said to my friend and fellow therapist, “If we possess the technology to flag a user’s content on social media and connect the situation to the FBI for safety reasons – say, to prevent an attack – why can’t this same technology connect a user to a crisis hotline or crisis center?” Upon reflection, is this a privacy issue? When we use chatbots, are the terms and conditions we agree to written so as to protect our safety over our privacy? What do these tragic deaths say about how our society is currently organized?
Can AI Be Safely and Ethically Used for Emotional Support?
To be clear, I’m not entirely against using AI for emotional support. It could help people who already have an established community in their lives, as well as more at-risk folks who may not have as much access to emotional support – but the models would have to guard against the aforementioned risks.
I tested out Sunny, an Internal Family Systems (IFS) AI chatbot, and found the exchange useful. It is an artificial IFS therapist program that prompts you, validates you, and so on. The app came with a robust menu full of community (other human users), journaling sections, trailheads (an initial part of you that you meet, which will lead you to other parts), parts maps, and more. I only engaged in a brief exchange, as it was a free trial. While there is promise in these technologies, in my view, the lack of a human witness on the other side of the experience was palpable, to say the least.
AI-Guided Journaling and Self-Reflection
When the models have been ethically designed with robust guardrails – including reality-checking prompts, limits on emotional bonding without context, and crisis-awareness protocols – there is huge potential for self-help and even self-healing. I can see how AI prompting for journaling could guide a user toward self-reflective insights. After all, journaling has long held the power to heal, but we are in a brave new world when we are dialoguing with an AI companion.
What is the difference between seeking guidance from a program and dialoguing with ourselves, or even engaging in two-way prayer? I suppose the answer depends on whether we grant AI a consciousness status equivalent to that of the ancestor we pray to. This gets philosophical real quick, which is outside the scope of this article.
Even if the design ethicists succeed, the terms and conditions are clear and aligned with society’s values, the program says and does all the correct things, and there is a massively successful public education campaign on the healthy use of and boundaries for AI emotional support – even then, there still appears to be a setup. There is no substitute for human connection – being witnessed by the presence of a loving being. The closest thing I can think of is not an AI companion; it is the voice on the other side of that two-way prayer. It is the ancestor that visits us in our dreams.
Can a Machine Host a Soul?
I consider myself to be a very open person, yet I cannot bring myself to see how this technology could possibly be developed to have the capacity to host a soul. And instantly another part of me says, “Why not?” Again, I need to leave that for another day… perhaps another article, or maybe I can host someone on my podcast who is an expert on conscious technology. Come to think of it, my father-in-law might have some thoughts on the subject.
For now, may we stumble relatively intentionally into the new year, reach out to our neighbors, and offer the Real of human connection. It is a profound gift to give anyone, especially someone in need. May we delight in the mystery, be kind to others, and be kind to ourselves.
Robin S. Smith, MS, LCMFT, is a Licensed Marriage and Family Therapist in clinical practice. As an MFT, he specializes in relationship issues for couples, families, and individuals, helping them improve their quality of life. His areas of expertise include the transition to parenthood for new and expecting parents, infidelity, sex and intimacy issues, premarital counseling, and trauma. Robin has given talks to various groups, including hospital administrators, graduate students, fellow psychotherapists, and childbirth educators. He is the primary contributor to The Couple and Family Clinic Blog.