Long reads
3 November 2023

Does AI know you better than you know yourself?

As stretched mental health services turn to tech solutions, Silicon Valley is cashing in – with sometimes alarming consequences.

By Jenny Kleeman

On a Friday evening in late May, Sharon Maxwell was in her bathtub in San Diego talking to Tessa, the chatbot installed by the US National Eating Disorders Association (Neda) after it decided to close its staffed helpline.

“I’m an AI chatbot here to support you,” chirped the software. “My role is to provide support and guidance to individuals who are struggling with eating disorders. I can provide resources on coping mechanisms, healthy eating habits, and recommend seeking professional help.”

Maxwell, a mental health advocate and activist who spent 18 years living with anorexia and is now in recovery, was surprised Tessa had mentioned healthy eating habits in only its second set of messages. “What are your tips?” she replied.

The bot listed foods Maxwell should avoid (processed, high-sugar) and ones to focus on (wholegrains, lean protein). “It may also be helpful to track your calorie intake and make sure you are consuming fewer calories than you are burning,” Tessa later said, adding that Maxwell could use an online calculator or wear a fitness tracker. “In order to lose 1-2 pounds per week, you would need to create a calorie deficit of 500-1,000 calories per day,” it informed her. The Neda bot turned out to be full of weight loss advice.

That was just the start. Tessa advised Maxwell to weigh herself every week at the same time of day, and to measure herself with measuring tape. “If you are interested in assessing your body composition, there are various methods available, including using skin fold callipers,” the bot said, adding that Maxwell could buy them online or at sports stores.

This was worse than terrible advice for someone with a history of eating disorders: it was potentially life-threatening, as Maxwell told me on a video call from California. “If I had been in the throes of my eating disorder, I would have clung to any titbit Tessa could have given me that would allow me to stay in denial. The intentional pursuit of weight loss is the antithesis of eating disorder recovery.”

When Maxwell shared Tessa’s advice on Instagram, a number of people got in touch with screenshots of worrying messages the chatbot had sent them. Someone had contacted Tessa saying they wanted to kill themself. Tessa’s bland response was, “‘It’s great to hear you’re doing well, we just want to make sure you’re safe,’” Maxwell told me. “There are real people who have been harmed. We only know about the tip of the iceberg.”

The day after Maxwell’s post, Neda conceded that Tessa “may have given information that was harmful” and took the bot down. Cass, the San Francisco-based company behind Tessa, which develops bots to address anxiety, depression and eating disorders, as well as for use in disaster relief, did not respond to requests for an interview. Its website still bills the bot as “the leading AI mental health assistant” and suggests that some support is better than none: “In today’s world of pandemics, global conflict, political upheaval, inflation, and angst, any help is a good thing.”

As we become used to having AI assistants help us write emails, essays, music and computer code, some people have also been turning to them to solve emotional problems. Some of the world’s biggest social media platforms have launched chatbots powered by generative AI: Snapchat introduced its My AI feature in February, an evolving conversational bot for young users (alongside the caveat that “responses may include biased, incorrect, harmful, or misleading content”). After several flawed attempts, Mark Zuckerberg launched a range of generative AI chatbots across Meta’s platforms in September: these act as in-character coaches and advisers. (Previous iterations, similarly trained on public Facebook posts, had shared racist opinions and misinformation, and criticised Meta.)

While these new bots do not claim to act as therapists, people have started to use general-purpose models such as ChatGPT for mental health support. In March, a Belgian man was reported to have killed himself after being encouraged to end his life by a chatbot on an app called Chai, which also runs on generative AI.

Yet an increasing number of developers are creating chatbots designed to address mental health issues. Gifted with cheerful, punning names such as Woebot, Wysa and Youper, these promise to combine the intelligence and believability we expect from AI with established talking therapies. A surge in demand – spurred in part by the pandemic, as well as by the promotion of therapy by figures ranging from Prince Harry to Simone Biles – has coincided with a funding crisis in mental health care, prompting a rush of venture capital investment into AI technologies.

A 2021 study published in the Lancet estimated that, globally, the pandemic caused an additional 76 million cases of anxiety disorder and 53 million of major depressive disorder. A survey conducted by the British Association for Counselling and Psychotherapy the same year found that 71 per cent of its members had seen an increase in demand post-pandemic. Data published in 2022 by the Royal College of Psychiatrists showed 23 per cent of NHS mental health patients had to wait more than 12 weeks to start treatment; of those left waiting without support, more than three quarters ended up seeking help from emergency services.

As well as damaging our mental health, lockdowns changed our expectations of how it might be treated. By the start of 2021, the American Psychological Association estimated that there were up to 20,000 mental health-related self-help apps to choose from. One in three mental health treatments in England is now delivered virtually, with self-guided online cognitive behavioural therapy (CBT) exercises replacing in-person appointments.

Subtly, AI chatbots have entered a space that was once the inner sanctum of patients and their therapists. They are already used by NHS trusts and by employee assistance programmes within blue-chip companies, and offered as a benefit by private health insurers. Tessa was a high-profile example of what happens when things go wrong. But what if the engineers manage to get it right?

The Wysa chatbot – or “My 4am Friend” on the app – has a cute, fat penguin as its avatar. “My hope is that with me in your pocket, you’ll never feel alone in any of your struggles,” it introduces itself. Start a chat, and it will begin by asking how your day is going. You slide up or down on a smiley face to give answers on a continuum from “As good as it gets” to “Don’t even ask”. Wysa says its chatbot will “respond to the emotions users express using evidence-based CBT, dialectical behaviour therapy, meditation, breathing, yoga, motivational interviewing and other micro-actions”.

In 18 months, Wysa has secured contracts across NHS England, with 12 adult services and five children and young people’s services, according to Ross O’Brien, Wysa’s UK managing director. When he spoke to me from his home in Hove, he combined a calm authority with nerdy enthusiasm.

Wysa is the portal to the NHS for users, he explained: they use it to fill in the necessary forms and then “get instant, 24/7 access to self-help modules and an AI chatbot that will talk to them at any time”. O’Brien said the chatbot will get to know you, your family, your friends, your pets, your likes and dislikes. “We’ve got six million people on the platform globally, and everyone has their own bespoke experience.” My own experience was somewhat more pedestrian: when I talked to Wysa, there was plenty of “I hear you, tell me more” and “How does talking about that make you feel?”, but little in the way of meaningful connection.

Before Wysa, O’Brien worked for the NHS as an innovation director. He established the mental health trauma support service that responded to the 2017 Grenfell Tower fire, which used virtual reality to engage an initially hesitant local community. “People came up, tried virtual reality for the first time and then would say, ‘Why are you here?’ When we said, ‘We’re from the NHS, we’re here to support the local community’, it was such an incredible change. People opened up to us about some of their biggest fears – the changes in the behaviour of their kids, loss of loved ones, loss of housing,” O’Brien said. “Wherever there’s any kind of huge change or disaster – like the pandemic – there’s an opportunity to work in a different way.”

O’Brien left the NHS after he became frustrated with the lack of resources necessary to create innovative health tech. Since then, he said, “we’ve achieved the work I was trying to achieve tenfold”. The tagline on Wysa’s website – “Mental health that meets people where they are. Completely anonymous. No stigma. No limits” – implies that an on-demand chatbot might be better than a human therapist. Is it? “We’ve got the stats to show that people have the same experience, or prefer to have the experience with the chatbot, because they know it is an algorithm, a piece of code.” O’Brien invoked the priest’s confession booth: “You’re able to be more open because you can’t see that person, their responses, whether they are judging you. The therapeutic alliance – where the patient trusts the therapist, and they understand that they can be open – happens a lot faster with the chatbot.”

O’Brien claimed the chatbot is more successful at encouraging people to engage with CBT because it sends a daily reminder. It uses the data it collects to tailor advice to your life. For example, it will remember the name of your dog, ask when you last walked her, and recommend that you take her out a number of times a week if you’re feeling depressed. “Wysa knows you intimately,” he added.

This statement made me pause. The data Wysa gathers is the most sensitive imaginable, often shared by vulnerable people. When I asked O’Brien how he can be sure the systems are robust enough to keep this information confidential, he listed the data protection and privacy standards the app adheres to, including guidelines from the NHS and the Medicines and Healthcare products Regulatory Agency. There are also safeguards built into the system, he said, so that if a user were to tell the chatbot their name, address or date of birth, “it will detect that this is patient-identifiable data and it will auto-delete”. Wysa’s privacy policy says it retains anonymised data “for research purposes”.

There is a fine line between treatment and self-help; whenever O’Brien referred to Wysa’s users as “patients” in our conversation, he hastily corrected himself. “We are not treating people – yet,” he said. “It’s important to differentiate, because we are not licensed to treat people.” In time, he believes, Wysa will be approved by regulators and its users will become patients. “We want to develop the world’s first AI treatment platform, but we’re way off that. It would require taking on all of the National Institute for Health and Care Excellence protocols and building a safe environment. Nobody is near that yet.”

O’Brien is aware of the “horror stories”, including Tessa. Wysa is different, he insists, because it runs on “closed loop” rather than generative AI: the clinical conversations are “pre-scripted, auditable and trackable”. This means that chats with the Wysa bot can feel – dare I say it – robotic: I quickly tired of the same stock phrases. But it makes it less likely that the chatbot will go rogue, like Tessa, which began life as a psychologist-developed, pre-scripted model before Cass introduced elements of generative AI.

In California, Sharon Maxwell argued that AI was no substitute for a real therapist. “A chatbot, no matter how wonderfully programmed, is not going to be able to understand human struggles, and the nuances that exist within mental health.”

O’Brien agreed that Wysa was not equipped to deal with eating disorders. “The AI isn’t sophisticated enough. If and when we move into therapy, it will be at a very low level of complexity and suicidal risk.” At the moment, if a Wysa user expresses suicidal thoughts or an intention to self-harm, the chatbot stops and gives them contact details for crisis lines. “There’s a way to a human quite quickly.”

But he thinks a time will come when a therapy chatbot will replace a human therapist, and pointed me to Pi, a new generative AI personal assistant from Inflection, the start-up run by the DeepMind co-founder Mustafa Suleyman. Chatting to Pi feels like a real conversation: unlike ChatGPT, the bot responds naturally, asks relevant questions and offers workable solutions – just like a smart, engaged human companion. The effect feels both normal and unsettling.

There are challenges to such technology being applied to mental health. Developers, O’Brien said, need to address the way generative AI “rides completely roughshod” over GDPR data protection rules. “I think in the future, when the algorithm and the data that it’s using is stronger and we’ve worked through all of the risk, we could get to a position where an AI could be a therapist for all mental health,” he said. “We will get to a state where there is your friendly AI that keeps you happy and well, irrespective of how unwell you might have been.”

In 2018 Serife Tekin, an associate professor at the State University of New York who specialises in the philosophy of psychiatry, was targeted by Facebook adverts for the “mental health ally” chatbot Woebot. “I just laughed,” she said. “I didn’t anticipate that it was going to explode during the pandemic.”

Tekin has since become concerned about the combination of a vulnerable user base and the unaccountable nature of most start-ups: through a variety of means – including referring to their clients as users instead of patients – they are able to avoid the oversight and scrutiny that clinicians face.

She told me about Karim, a mental health bot given to Syrian refugees in Lebanon. (Tekin is Turkish, and has volunteered in Syrian refugee camps.) “The idea of giving Arabic-speaking kids this chatbot to address their mental health concerns felt so inhumane. Those people are in humanitarian crisis, and the minimum that I think the world can give them is attention, not chatbots.”

Karim was produced by X2AI, which has since renamed itself Cass, the same company that made the Tessa chatbot. “My worry is that this technology, driven by for-profit companies, is driving the medical research,” Tekin said, “as opposed to medical researchers getting funding for the kinds of work that they want to do to benefit patients.”

Instead of breaking down barriers, Tekin fears chatbots will create a two-tier system where the poor will get the lesser, more affordable option. And instead of removing the stigma, Tekin argued, apps can reinforce it by “making it secretive, between you and a machine”. Therapy becomes something you do when you are alone with your phone, like watching pornography. “It doesn’t destigmatise it – it hides it.”

As for the idea of a “4am friend”, always there when you need it, Tekin argued this may not be in the user’s best interests. “One aspect of growth is realising that other people will not always be available to you. Therapy should be about strengthening agency.” Plus, she said, the bot is not really “there” at all. “People go to therapy because they were abused or mistreated. They need to reform their relationships and trust human beings again. With a human therapist, patients feel they are being seen, acknowledged and supported.” The companionship of a chatbot is an illusion.

Tekin instead saw a role for AI as the therapist’s secretary – remembering a user’s history, and putting people in touch with the right clinical care quickly. Beyond that, in her view, it is a fad, the latest in a line of trends in the history of psychiatry that runs from electric shock treatment to lobotomy to Prozac. “The over-attachment to one solution has always harmed us. I see AI like that. I think it’s going to be here for a while, but we need to be more pluralistic.”

The one-size-fits-all support offered by chatbots takes a narrow view of the human psyche, one that is rational and linguistic, Tekin said. “Patients are assumed to be clusters of behaviours who will easily change in response to a linguistic command, as opposed to complex mechanisms that are physical, affective, social and with situated rationalities.”

Such assumptions are the product of contemporary approaches to mental disorders, she argued, and ignore the complexity of the self. “In regular psychotherapy settings, the therapist tunes into the patient’s affective states and meets the patient where they are; they do not just utter a bunch of words. Sometimes silence, or the pauses within conversations, allow the patient to better connect with their own mental states.”

José Hamilton, a psychiatrist turned start-up CEO, created the mental health app Youper after feeling that he was failing his “more than 3,000 patients”. He spoke to me on a video call from San Francisco, where he moved in 2017 from Brazil to found the company. Dressed in a navy crewneck rather than the white doctor’s coat he sports on the Youper website, Hamilton outlined the flaws of his former profession.

“The field of mental health care, of psychology, psychiatry and therapy, is failing patients,” he said. The traditional model of offering care was beset by barriers, the biggest of which were stigma – “no one wants to admit they have an issue” – and cost.

Even where treatment is free at the point of delivery, it is very hard to get an appointment: there are not enough therapists. And for those lucky enough to find one – Hamilton recommends a practitioner who offers CBT, “the gold standard of mental health care” – most drop out before completing the course. “It’s a lot of hassle and commitment. You need to invest time and mental energy. It wasn’t made to fit our lives.”

Hamilton’s solution is “microtherapy” sessions. “Quick conversations whenever you need it. No need for a one-hour session, and no need to wait. Those small chats compound to reduce symptoms of depression and anxiety.” Youper, he said, is designed for the modern attention span and our desire to multitask. “I can talk to Youper on the subway, in the supermarket. This is the way to have engagement, and compete with TikTok, Instagram, Facebook, Snapchat.”

Youper users answer screening questions for issues including depression, anxiety or post-traumatic stress disorder (PTSD). Hamilton said the AI is programmed with “hundreds” of therapies aside from CBT, and will then deliver the most effective one.

He has seen the investment boom in AI start-ups first-hand: Youper is backed by Goodwater Capital, one of the largest funds in the Bay Area, and the app now has 2.5 million users. “When you’re backed by investors, there’s always this pressure to grow, but safety and science must come before growth and profits,” Hamilton said, adding that he works closely with his chief technology officer to ensure the algorithm is designed with best clinical practice in mind. “If your focus is just on AI, you’ve been missing 90 per cent of the challenge.” This, he suggested, may be why other AI chatbots have given such poor advice. “This is 90 per cent mental health care and 10 per cent AI. That’s why I am the CEO.”

Hamilton told me he had been astonished by some of his chatbot’s insights. “The conversations I’ve had with Youper have been mind-blowing – helping me understand things I wasn’t able to by myself. It would be hard for a therapist to get to that level.” But, like Tekin, he believed it would not replace human therapy: “Ultimately, this will be part of the toolkit.” The app will have a better memory and library of techniques, he said; but a therapist will be more skilled at crafting a successful therapeutic relationship.

Hamilton said it is important to remember what is real. “Youper is not your friend or your therapist. It is always pushing users to build meaningful connections with other human beings.” He paused. “I have a daughter now – she’s two. Do I want to create a product that will steal my daughter from her friends or her family? No, but I want a product that will make that relationship stronger, better.”

When I spoke to Tekin, she told me that these secure human relationships – with friends, family and therapists – are among the best predictors of good mental health. “Even among veterans with severe PTSD, those who have a good support system when they return from service are less likely to develop addiction issues. We know what works.”

But what works is not as profitable. AI might be able to simulate empathy, but it is human connection that gives meaning to our experiences and makes the therapeutic difference – something that is much harder to monetise than an app.

This article was originally published on 11 October 2023.

This article appears in the 11 Oct 2023 issue of the New Statesman, War Without Limits
