Microsoft just filed a patent for an AI therapist app. How does that make you feel?
According to the filing — as first caught by the Windows-watching folks at MSPoweruser — the AI shrink, described as a “method and apparatus for providing emotional care in a session between a user and conversational agent,” is designed to provide users with emotional support, analyzing their feelings and carefully storing their information and conversations so as to build a “memory” of each user, their life, and their emotional triggers.
The model is apparently able to process both imagery and text, and, importantly, seems to be outlined less as an emergency service — if you’re in a serious emotional crisis, you’d definitely want to seek human help — and more as a general space for users to talk about their lives.
As the AI gathers more information about its users, the filing suggests, it can start to pick up on certain emotional cues. Based on these cues, it might ask some prompting questions — and even, in some cases, make suggestions for how the user might deal with their issues. As one figure provided by Microsoft shows, a message from a user saying that they’re “feeling so bad these days” prompts the AI to ask: “what happened?” When the user answers that family woes have left them feeling tired, the AI suggests that they might consider going on a “30-minute run for refreshing.”
But Microsoft seems to believe that the AI can also perform deeper psychoanalysis. According to one figure, the user also has the option to take an “explicit psychological test,” which Microsoft says will be assessed by a “scoring algorithm predefined by psychologists or experts in psychological domains.”
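For the curious, here’s a purely hypothetical sketch of what such a predefined scoring setup could look like in practice; the questions, thresholds, and labels below are invented for illustration and do not come from Microsoft’s filing.

```python
# Purely hypothetical sketch: a "scoring algorithm predefined by psychologists"
# for a short self-report test could be as simple as summing the answers and
# mapping the total onto expert-chosen bands. Nothing here is from the patent.

# Invented questions, answered on a 0-3 scale (0 = never, 3 = nearly every day)
QUESTIONS = [
    "Little interest or pleasure in doing things?",
    "Feeling down or hopeless?",
    "Feeling tired or having little energy?",
]

# Invented score bands standing in for whatever experts might predefine
BANDS = [
    (0, 2, "minimal"),
    (3, 5, "mild"),
    (6, 9, "worth discussing with a human professional"),
]


def score_test(answers: list[int]) -> str:
    """Sum the user's answers and return the band the total falls into."""
    total = sum(answers)
    for low, high, label in BANDS:
        if low <= total <= high:
            return f"score {total}: {label}"
    return f"score {total}: out of range"


if __name__ == "__main__":
    # Answers of 1, 2, and 3 sum to 6, which lands in the highest band
    print(score_test([1, 2, 3]))
```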
This certainly wouldn’t be the first attempt to apply AI to mental healthcare. Something called Woebot apparently exists, and who could forget the National Eating Disorders Association’s disastrous foray into chatbot services? It’s also true that humans have already turned to several non-therapy-specific bots for companionship and support, with many folks developing deep personal connections to those AIs.
To that end, though, we’d be remiss not to note that in several cases, those interactions have ended up causing far more harm than good. Back in April, it was alleged that a Belgian man with severe climate anxiety killed himself after seeking a chatbot’s support. And in 2021, a then-19-year-old tried to kill the late Queen Elizabeth II — yes, seriously — with a crossbow after finding support and encouragement for the deadly plot from his AI girlfriend.
And elsewhere, in notably less extreme cases, some users of chatbot companion apps have reported feeling distressed when their chatbot pals and lovers have been dismantled. If Microsoft’s AI were to develop close relationships with users and then, for whatever reason, be shut down — it wouldn’t be the first time Microsoft has killed a bot — we could imagine that the therapy AI’s patrons might go through a similarly painful ordeal.
There are some more obvious liability caveats here, too. To make an app like this both functional and safe, Microsoft will need to build in some bulletproof AI guardrails. And, like a human therapist, the bot should probably be required to report dangerous or unsettling behavior to authorities or specialized health services. Even then, regardless of its guardrails, it remains to be seen whether any current AI has the emotional nuance required to handle the complicated and often fragile situations that surround mental health and illness. What if it says the wrong thing to someone struggling, or simply doesn’t say enough?
It’s difficult to imagine that a bot could offer someone the same empathy and support that a good human therapist can — but, if this patent is any indication, Microsoft apparently intends to find out.
More on AI: Lonely Redditors Heartbroken When AI “Soulmate” App Suddenly Shuts Down