
On AI and the Human Element of Healing

Discussing which jobs will be replaced around the campfire

Last fall, I went camping with several other families from my neighborhood. As we sat around the fire, the conversation turned from Sam Altman's ouster at OpenAI to what the AI revolution will mean for us more generally. We had lawyers, professors, therapists, architects, designers, teachers and parents in the conversation. While there was a lot of speculation about what humanity may lose in the AI revolution, one concern has stuck with me.

One professor of therapy insisted that there was one thing an AI model could never replace. Even though a large language model like ChatGPT could be trained to respond like a therapist, adhering to best practices and replying with empathic-sounding words, an essential part of the healing that takes place between therapist and client is the experience the client has of being seen, their story being known and understood by another person. No matter how good the AI is, it cannot provide this.

An essential part of the healing that takes place between a therapist and client is the experience the client has of being seen, their story being known and understood by another person.

Humans evolved for connection

I think this is a critical observation. Human beings are wired for interpersonal connection. We evolved and succeeded in becoming the most powerful species on the planet by cooperating in groups. We devote a tremendous amount of brain power to simulating what other people around us are thinking so we can be better family members, colleagues and teammates. And yet this aspect of human minds also tends to make us attribute agency to things that are not in fact agents. The psychologist Justin Barrett, in his book Born Believers, explains the evolutionary advantage that goes to the person who attributes the rustling they hear in the bushes to a lion waiting to pounce, rather than assuming it's only the wind and being wrong.

And we know from years of research that humans are likely to attribute agency and personhood to software, especially when interacting with it through verbal exchanges in text. A good deal of ChatGPT's attractiveness as a product can be attributed to its effectiveness at responding the way a person would, using large samples of human responses as its source material.

It's obviously unethical to create experiences that deceive users into believing a person is interacting with them when in fact there is only software. But could users who know they are talking only to software still experience the positive psychological impact of feeling seen and known, based simply on the quality of responses from software trained on how humans communicate those sentiments?

Can AI make people feel seen?

In my past work at Twin Health, as we experimented with large language models, we constantly kept in mind that at the core of our members' trust in our product is trust in the people who care about them and are paying attention to their health in a way that, often, no one really had before. Our members' trust in the program, and in the behavior and medication changes we were asking them to make, rests primarily on their trust in people, not on their trust in their Digital Twin or the AI.

This doesn't mean companies should shy away from using these tools, especially where they can unburden our human care teams from tedious, repetitive work or provide real-time decision support for people in ways that no human could afford to. But it does mean that one key metric we should keep our eye on is how seen, cared for and in relationship our users feel as we increase automation to enable the scale that unlocks new levels of care, matched to the size of the problem of chronic diseases like diabetes.