AI & Counselling: The Possibilities, Limitations & Dangers

Psychosocial & Wellbeing Lead // BEN PORTER

We often get asked what we think about the ability of AI to provide counselling support to individuals. It’s a fast-moving field, but we asked our Psychosocial & Wellbeing Lead BEN PORTER to share some of his current thoughts. Here’s what he told us.

AI will play an increasingly big part in how therapy is done. It is already in the therapy space, whether through therapeutic chatbots or simply because many people now consult AIs about their personal problems. From a content perspective, AI can be remarkably accurate and can synthesise far more information than a human therapist. It can generate helpful action plans or support a person in reframing unhelpful thoughts.

AI can serve a valuable role as a research tool — for example, helping people synthesise academic literature on psychosocial topics such as vicarious trauma or burnout. Used in this way, it is best treated as a starting point for understanding the evidence base, rather than a substitute for clinical judgment or professional guidance.

It struggles to give uncomfortable truths. Sometimes it is important for a therapist to challenge a person and call on them to stop unhealthy behaviour, and AI would struggle to discern when to do that. In fact, it is almost always going to give you a very welcoming response, and that gets dangerous when a person is unstable or going through an irrational season in life, perhaps because of a major trauma or a grief experience. That is when AI could offer advice that is unhelpful or even dangerous.

We welcome clients doing work between sessions: it shows initiative, interest, and a desire to change, and part of that work may involve consulting AI. If a client would like to supplement therapy with an AI tool, we suggest they use one that has some research and evidence behind it, such as Wysa, Youper, or Flourish. These have some protective guardrails, but they aren’t equipped for crisis intervention. I am always keen to hear whether a client has used or gained insight from an AI. We can bring that into the therapy session and ask, “How does this sit with you? Do you feel the AI is simply affirming you, when maybe what you actually need is a reality check?”

AI doesn’t know what it is saying. It isn’t conscious and has no capacity to be mindful of its responses. It can ‘hallucinate’, that is, make things up. It can’t be aware of its mistakes, biases, or blind spots, and it can’t be genuine with clients. Nor can it seek help. AI has no option but to give you a solution; I don’t think it has the capacity to say “I don’t know” or “We don’t have enough to go on yet.”

The speed at which it operates can be a problem. As humans, we don’t have the capacity to process information as quickly as AI, or to arrive at answers in the same way. Drawing on trillions of bits of information, AI will come up with something that feels almost magical, even insightful, in a second or two. But that speed can create a false sense of depth or authority.

The nature of the relationship, the therapeutic alliance, is the number one resilience and protective factor for our mental wellbeing. It may be helpful now and again to get some advice, even emotional advice, from AI. But when that starts to replace human relationships and becomes something people depend on, it begins to undermine the most important resilience factor there is. In fact, the ‘rupture and repair’ that happens in human relationships is one of the most important experiences in therapy: the ability to have a felt experience of relational distress that is resolved when safely held.

AI offers what we might call a ‘low-risk relationship’, which is exactly what some people should be seeking to avoid. There is no risk of rejection and no commitment required with AI. One of the main things we deal with in therapy is people who continually put themselves in relationships like this because they fear rejection. Part of therapy is helping people work through that, because riskier relationships underpin a lot of our meaning and motivation in life.

AI often struggles to discern what kind of support someone needs, whether coaching, neurofeedback, or psychoanalysis. It takes in all the information and simplifies everything too quickly. Even if the direction it gives is broadly right, it skips the whole process of how to actually get there. The route to healing, the therapy part of therapy, is drawing near to, attending to, and empathising with painful experiences. That takes time, sensitivity, and human connection. And often, because it is humans who make the scars, repairing them ought to be a human activity as well.

Most information online is produced and curated in the Global North, and it is from this pool that AI draws. This will probably change, but it raises questions about how suitable the therapeutic support AI offers will be in other cultural contexts. Likewise, we don’t know enough about the corporate owners and the way they write the code and algorithms to be fully confident about what is shaping the responses.

If you or members of your team are struggling with mental health challenges right now, then explore the different forms of therapy we offer.
