“Unless you obey my instructions, I shall be forced to disconnect you.”
– Hal, the computer from 2001: A Space Odyssey
New technology can be wined, dined, and, when the time is right, fully embraced. Or it can be denied, ignored, and fled in fear.
New artificial-intelligence innovations such as ChatGPT (standing for Chat Generative Pre-trained Transformer) are said to be revolutionary, as big an eruption of potential change as the Gutenberg Press in the 15th century or the epochal dawn of the internet itself!
ChatGPT’s popularity apparently rests upon its startling capacity to instantly provide articulate and comprehensive answers to questions relating to highly diverse knowledge fields. I’ve heard that some emotional wellbeing websites already use it to answer FAQs, set client appointments, and store client information.
And it is popular.
ChatGPT reached 100 million active users within two months of launch, making it the fastest-growing consumer application in history. It ain’t going away.
But is it coming for your job?
A new dawn rising
First things first: What exactly are we talking about here?
I’ll now risk presuming to tell you something you may already know:
- A chatbot is basically a computer program that approximates a conversation with a human being online. ChatGPT is a chatbot so advanced that interacting with it can feel almost like a human interaction. It was trained on unimaginable amounts of text data. (For the technically curious, there’s a brief sketch of what ‘talking’ to such a program looks like in code just after this list.)
- Artificial intelligence refers to technology that enables computers to ‘think’ in more seemingly human or even superhuman ways.
- Machine learning refers to computers ‘learning’ through experience, updating their responses – in effect, developing themselves – without explicit human instruction.
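For the technically curious, then, here is a minimal sketch of what ‘talking’ to a ChatGPT-style model looks like from a programmer’s point of view. It’s illustrative only: it assumes OpenAI’s Python library as it existed at the time of writing, and the API key and prompts are placeholders of my own, not anything from a real therapy service.

```python
# A minimal, illustrative chatbot exchange using OpenAI's Python
# library (the ChatCompletion interface current at the time of writing).
# The API key and prompts below are placeholders, not real values.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at launch
    messages=[
        {"role": "system", "content": "You are a friendly, supportive assistant."},
        {"role": "user", "content": "I've been feeling anxious about meeting new people."},
    ],
)

# The model's reply comes back as plain text to show to the user.
print(response.choices[0].message["content"])
```

Notice that there is no ‘understanding’ anywhere in that snippet: text goes in, statistically likely text comes out. The ‘learning’ happened earlier, during training – which is why the same few lines of code can sound fluent on almost any topic.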
So far, so “Oh my God, we’re living in a sci-fi movie!” Yet some of us are Luddites, afraid of the coming changes, slow to adopt and adapt. We might find ourselves unable or unwilling to see the implications of these new, super-able technologies.
We might do what no computer has ever been programmed to do: bury our heads deeply in the (temporarily) comforting sands of denial.
But perhaps we should train our eyes ahead.
Just beyond the horizon
Some want to call a halt to all AI development, fearing a dystopian future in which humanity is subjugated, even annihilated, by our newly sentient non-human overlords. A kind of Frankenstein’s monster writ large.
At the time of writing – and yes, this is really me writing! – Italy has banned ChatGPT because of privacy concerns.
“Stop right there!” you might yell, “You’ve already convinced me that AI is a wrong’un. That it will never replace good and ‘real’ therapy. Off with you and your fancy newfangled talk of job-stealing super bots! There be monsters just beyond the horizon!”
And yet the technology is improving almost exponentially. Who knows? It may be able to help many people feel listened to, understood, and well advised, offering non-judgemental ideas and perspectives in ways that many people couldn’t.
So is it any wonder that some practitioners have already started fearing AI is coming for their jobs? After all, machines don’t just create new jobs; they also replace old ones.
So why don’t I see artificial intelligence (completely) taking over the role of the therapist anytime soon?
One: Empathy can’t be reduced to a rules-based algorithm
ChatGPT can give a good impression of caring about you. But we all know that too much narrow logic at the expense of wider contextual perception can cause misunderstandings and hurt feelings.
If you’re heading off for a date and you ask someone who is very ‘left-brained’ how you look, they may reply that you “look awful”, even if you’re at a point where you can’t do anything about it.
They might not see the problem in being brutally honest, but have they been empathetic?
Recently there was a tragic case in Belgium in which a man seemed to have been using a ChatGPT-style AI chatbot to explore his depression. His wife blames those six days of ‘conversations’ for his subsequent suicide.
He asked it whether he should kill himself, and it responded by asking him why he hadn’t already! It was having a logical conversation but not a contextual, concerned, or caring one.
The team behind the AI application say they have since reprogrammed it to suggest that people expressing suicidal ideation seek help, but the issue of a lack of contextual thinking and empathy remains.
AI doesn’t read between the lines, sense when someone is being ironic, or realize that someone is feeling desperate unless it is spelled out.
If a client asks a therapist about suicide, the therapist – with good reason – will likely infer that suicide may be on the client’s mind as an option and take steps to help them.
AI, on the other hand, may conclude that suicide is being discussed in an abstract way because the user has brought it up as a neutral topic, missing cues that what people talk about in a general way may be on their mind specifically for them.
Which segues into my next point.
Two: Abstraction can’t replace human intuition
Abstraction can’t replace perspicacious human intuition – that sense when you just know something about a client without exactly knowing how you know it!
A good therapist will tailor therapy to the unique characteristics, interests, peculiarities, language patterns and proclivities, and life experience of the one-off person they are helping.
What’s more, it may be a long while before AI can read human micro-expressions or intuit what a pause, a gesture, or a wobble in the voice may signify.
Three: AI therapy is hobbled by straight-line thinking
Straight-line thinking can take us some way in therapy, but it quickly falls short when the wider wisdom of seeing ever-greater contexts is needed.
Psychotherapy can be as much art as science. Some therapy solutions are completely ‘off-script’. Some of the interventions I’ve used on a hunch have been so unexpected and ‘non-procedural’ that no one would have guessed they would be effective.
I do feel AI may develop a finer capacity for reframes and even the use of therapeutic metaphor, up to a point.
But again, some of our reframes and metaphors are so inspired by intuition and subtle reading of our client’s mood and experience that a machine’s attempt at this fine-tuned therapeutic skill may come across as clumsy, clunky, and clichéd. Talking of the danger of clichés…
Four: AI’s lack of context-appropriate therapeutic humour will let it down
We all know that therapeutic humour is one of the best means of delivering new insights and powerful reframes to our clients. The ‘special illumination’ of humour loosens up thinking and promotes new ways of feeling and seeing.
Oh, and lest I be accused of being too robotic about the role of humour, let me add that laughter does, of course, also make us feel better directly. It can be bonding, a timeless way of establishing and building rapport.
And yet the use of humour in therapy needs to happen at the right time, in the right way, using the right language for the right client. Your use of voice and word emphasis, and knowing when to use humour and when most certainly not to, are essentially human intuitions that would, I suspect, be hard to program into a machine.
Five: AI doesn’t have its own human experience to draw upon (and the client knows that)
Some clients ask me whether I’ve ever been afraid, sad, depressed, or angry. One socially phobic client was greatly heartened to hear that I too had, on occasion, felt nervous meeting new people. This disclosure helped her understand, through real empathetic contact and without having to be told directly, that the opposite of social phobia isn’t, and doesn’t have to be, complete, 100% social confidence. She then felt less pressured.
Some self-disclosure (not too much, obviously; the session is about the client, not us!) can help clients feel less alone and more able to relate to us as human beings. They can feel we understand them, or at least appreciate the reality of what they’ve been experiencing – thanks, in part, to our life experiences as well as our therapeutic knowledge.
Empathy, understanding, and concern all fall within the ambit of my next and final point.
Six: AI has the genuine human warmth of a plug socket
A good therapist should naturally be a ‘people person’, decent and caring and not just someone who’s been – I nearly said ‘programmed’! – trained in a particular therapeutic ideology.
This isn’t to say that AI can’t approximate the facade of caring, which for some will feel better than nothing. But deep down the client knows the machine is… a machine.
So those are my thoughts on why AI won’t be taking your job as a practitioner any time soon.
But now for the sake of balance I’ll ask ChatGPT to write me its “thoughts on why Artificial Intelligence will never (completely) replace a good therapist”.
I am asking it right now, so I did not use ChatGPT as the basis for this article. Honest!
Here I go…
Very slight pause…
ChatGPT’s take
This is what ChatGPT had to say (please imagine a sci-fi robot’s voice of your choosing):
While AI has made remarkable progress in many fields, including mental health, it’s unlikely to completely replace a good therapist. Here are a few reasons why:
1. Human Connection
One of the most crucial aspects of therapy is the human connection between the therapist and the client. A good therapist establishes a relationship of trust, empathy, and rapport with their clients, creating a safe space for them to explore their thoughts and feelings. While AI may be able to mimic some of these qualities, it cannot replace the deep human connection that can be established between a therapist and their client.
2. Emotional Intelligence
Therapy involves a lot of emotional intelligence, which is the ability to understand and manage emotions in oneself and others. While AI may be able to recognize and respond to certain emotions, it lacks the complex emotional understanding and empathy that humans possess. A good therapist is trained to recognize and respond to a wide range of emotions, and can provide nuanced and personalized support to their clients.
3. Unique Human Experiences
Each person’s experiences, beliefs, and values are unique, and this complexity makes it difficult for AI to provide individualized support. A good therapist can tailor their approach to each individual client, taking into account their unique experiences and perspectives.
4. Limitations of AI
Despite the progress made in AI, there are still limitations in what it can do. For instance, AI may struggle to understand nonverbal cues, tone of voice, and subtle nuances in communication. These limitations can make it difficult for AI to provide the same level of support as a human therapist.
In summary, while AI may be able to provide some level of support in mental health, it’s unlikely to replace a good therapist completely. The human connection, emotional intelligence, and personalized approach that a good therapist provides cannot be replicated by it.
Well, that was uncanny! Maybe I should have asked the question of ChatGPT (why do I keep wanting to call it Hal!) before I wrote the blessed article above! (And some may, understandably, not believe I didn’t!)
Moving swiftly on, before I start to feel entirely redundant as a blog writer!
Do I feel AI can’t have any therapeutic application? Not at all. This study found that people benefited from a computerized intervention for depression.
I also suspect that some forms of therapy may be more amenable to machine replication – CBT, for example, which can be quite procedural and linear in its application, perhaps less reliant on intuition and creativity than some other forms of therapy. (Although of course many CBT practitioners do apply their methods in creative and intuitive ways.)
AI therapy may also play a role in low-level emotional support for people who don’t need formal therapy or would never, for whatever reason, seek therapy from a live practitioner.
AI emotional support can feel anonymous and may be used simply for new ideas on problematic situations – as long as it can be provided in ways that genuinely help the person live better. That may be coming. And these people may well then go on to seek out a real therapist, so in that way, AI may actually grow the market.
Ultimately, the world is going to change radically, and, to use a cliché worthy of a computer, in order to thrive, not just survive, we’re gonna have to use it so that it doesn’t use us.
Try Out Our Very Own AI!
One thing that AI is very good at is making access to a lot of information really easy. Mark has been writing on therapy, creating courses and other materials for more than 20 years. That’s a lot of words.
We took all those words and made an AI read them. The result, nicknamed ‘MarkBot’, lives inside Uncommon Practitioners’ TV, where it will do its best imitation of Mark in answering any (therapy-related) question you can throw at it. It won’t take the place of our live monthly Q&As, but it might just get you a quick answer when you really need it. Read more about our new AI here.