
Your New Therapist: Chatty, Leaky, and Hardly Human

If you or someone you know may be experiencing a mental health crisis, contact the 988 Suicide & Crisis Lifeline by dialing or texting “988.”

Vince Lahey of Carefree, Arizona, embraces chatbots. From Big Tech products to “shady” ones, he said, they offer “someone that I could share more secrets with than my therapist.”

He especially likes the apps for feedback and support, even though they sometimes berate him or lead him to fight with his ex-wife. “I feel more inclined to share more,” Lahey said. “I don’t care about their perception of me.”

There are a lot of people like Lahey.

Demand for mental health care has grown. Self-reported poor mental health days have risen 25% since the 1990s, according to researchers analyzing survey data. And suicide rates in 2022 reached levels that hadn’t been seen in nearly 80 years, according to the Centers for Disease Control and Prevention.

Many patients find a nonhuman therapist, powered by artificial intelligence, highly appealing, more appealing than a human with a reclining couch and stern manner. They want a therapist who’s “not on the clock,” who’s less judgmental, or who’s just less expensive.

Most people who need care don’t get it, said Tom Insel, former head of the National Institute of Mental Health, citing his former agency’s research. Of those who do, 40% receive “minimally acceptable care.”

“There’s a massive need for high-quality therapy,” he said. “We’re in a world in which the status quo is really crappy, to use a scientific term.”

Insel said engineers from OpenAI told him last fall that about 5% to 10% of the company’s then roughly 800 million users relied on ChatGPT for mental health support.

Polling suggests these AI chatbots may be even more popular among young adults. A KFF poll found about 3 in 10 respondents ages 18 to 29 had used an AI chatbot for mental or emotional health advice in the past year. Uninsured adults were about twice as likely as insured adults to report using AI tools. And nearly 60% of adult respondents who used a chatbot for mental health didn’t follow up with a flesh-and-blood professional.

The App Will Put You on the Couch

A burgeoning industry of apps offers AI therapists with human-like, often unrealistically attractive avatars serving as a sounding board for those experiencing anxiety, depression, and other conditions.

KFF Health News identified some 45 AI therapy apps in Apple’s App Store in March. While many charge steep prices for their services (one listed an annual plan for $690), they’re still generally cheaper than talk therapy, which can cost hundreds of dollars an hour without insurance coverage.

On the App Store, “therapy” is often used as a marketing term, with small print noting the apps cannot diagnose or treat disease. One app, branded as OhSofia! AI Therapy Chat, had downloads in the six figures, said OhSofia! founder Anton Ilin in December.

“People are looking for therapy,” Ilin said. On one hand, the product’s name promises it; on the other, the app warns in its fine print that it “does not provide medical advice, diagnosis, treatment, or crisis intervention and is not a substitute for professional healthcare services.” Executives said they don’t think that’s confusing, since there are disclaimers in the app.

The apps promise big results without backup. One promises its users “immediate help during panic attacks.” Another says it was “proven effective by researchers” and that it offers 2.3 times faster relief for anxiety and stress. (It doesn’t say what it’s faster than.)

There are few legislative or regulatory guardrails around how developers refer to their products, or even whether the products are safe or effective, said Vaile Wright, senior director of the office of health care innovation at the American Psychological Association. Even federal patient privacy protections don’t apply, she said.

“Therapy is not a legally protected term,” Wright said. “So, basically, anybody can say that they give therapy.”

Many of the apps “overrepresent themselves,” said John Torous, a psychiatrist and clinical informaticist at Beth Israel Deaconess Medical Center. “Deceiving people that they have received treatment when they really have not has many negative consequences,” including delaying actual care, he said.

States such as Nevada, Illinois, and California are trying to sort out the regulatory disarray, enacting laws forbidding apps from describing their chatbots as AI therapists.

“It’s a profession. People go to school. They get licensed to do it,” said Jovan Jackson, a Nevada legislator who co-authored an enacted bill banning apps from referring to themselves as mental health professionals.

Beneath the hype, outside researchers and company representatives themselves have told the FDA and Congress that there’s little evidence supporting the efficacy of these products. The studies that do exist raise concerns, and some companion-focused chatbots are “consistently poor” at managing crises.

“When it comes to chatbots, we don’t have any good evidence it works,” said Charlotte Blease, a professor at Sweden’s Uppsala University who specializes in trial design for digital health products.

The lack of “good quality” clinical trials stems from the FDA’s failure to provide recommendations about how to test the products, she said. “FDA is offering no rigorous advice on what the standards should be.”

Department of Health and Human Services spokesperson Emily Hilliard said, in response, that “patient safety is the FDA’s highest priority” and that AI-based products are subject to agency regulations requiring the demonstration of “reasonable assurance of safety and effectiveness before they can be marketed in the U.S.”

The Silver-Tongued Apps

Preston Roche, a psychiatry resident, gets lots of questions about whether AI is a good therapist. After trying ChatGPT himself, he said he was initially “impressed” that it was able to use techniques to help him put negative thoughts “on trial.”

But Roche said that after seeing posts on social media describing people developing psychosis or being encouraged to make harmful decisions, he became disillusioned. The bots, he concluded, are sycophantic.

“When I look globally at the responsibilities of a therapist, it just completely fell on its face,” he said.

This sycophancy, the tendency of apps based on large language models to empathize with, flatter, or delude their human conversation partner, is inherent to the apps’ design, experts in digital health say.

“The models were developed to answer a question or prompt that you ask and to give you what you’re looking for,” said Insel, the former NIMH director, “and they’re really good at basically affirming what you feel and providing psychological support, like a good friend.”

That’s not what a good therapist does, though. “The point of psychotherapy is mostly to make you address the things that you have been avoiding,” he said.

While polling suggests many users are satisfied with what they’re getting out of ChatGPT and other apps, there have been reports of harm from the service, including encouragement to self-harm.

And lawsuits have been filed against OpenAI after ChatGPT users died by suicide or were hospitalized. In most of those cases, the plaintiffs allege the users began using the apps for one purpose, like schoolwork, before confiding in them. These cases are ongoing.

Google and the startup Character.ai, which has been funded by Google and has created “avatars” that adopt specific personas (like athletes, celebrities, study buddies, or therapists), are settling other wrongful-death lawsuits.

OpenAI’s CEO, Sam Altman, has acknowledged that some users may talk about suicide on ChatGPT.

“We have seen a problem where people that are in fragile psychiatric situations using a model like 4o can get into a worse one,” Altman said in a public question-and-answer session, referring to a particular model of ChatGPT introduced in 2024. “I don’t think this is the last time we’ll face challenges like this with a model.”

An OpenAI spokesperson did not respond to requests for comment.

The company has said it has worked on safeguards, such as referring users to 988, the national suicide hotline. However, the lawsuits against OpenAI argue existing safeguards aren’t good enough, and some research suggests the problems persist. OpenAI has released its own data suggesting the opposite.

OpenAI is fighting the suits, offering, early in one case, a variety of defenses ranging from denying that its product caused self-harm to alleging that the user misused the product by inducing it to discuss suicide. It has also said it’s working to improve its safeguards.

Smaller apps also rely on OpenAI or other AI models to power their products, executives told KFF Health News. In interviews, startup founders and other experts said they worry that a company that simply imports those models into its own service might duplicate whatever safety flaws exist in the original product.

Data Risks

KFF Health News’ review of the App Store found listed age protections are minimal: Fifteen of the nearly four dozen apps say they can be downloaded by users as young as 4; an additional 11 say they can be downloaded by those 12 and up.

Privacy standards are opaque. On the App Store, several apps are described as neither tracking personally identifiable data nor sharing it with advertisers. But on their company websites, privacy policies say otherwise, discussing the use of such data and the disclosure of information to advertisers, like AdMob.

In response to a request for comment, Apple spokesperson Adam Dema pointed to the company’s App Store policies, which bar apps from using health data for advertising and require them to display information about how they use data in general. Dema did not respond to a request for further comment about how Apple enforces these policies.

Researchers and policy advocates said that sharing psychiatric data with social media firms means patients could be profiled. They could be targeted by dodgy treatment firms or charged different prices for goods based on their health.

KFF Health News contacted several app makers about these discrepancies; two that responded said their privacy policies had been put together in error and pledged to change them to reflect their stances against advertising. (A third, the team at OhSofia!, said simply that it doesn’t do advertising, though the app’s privacy policy notes users “may opt out of marketing communications.”)

One executive told KFF Health News there’s business pressure to maintain access to the data.

“My general feeling is a subscription model is much, much better than any sort of advertising,” said Tim Rubin, the founder of Wellness AI, adding that he’d change the description in his app’s privacy policy.

One investor advised him not to swear off advertising, he said. “They’re like, essentially, that’s the most valuable thing about having an app like this, that data.”

“I think we’re still at the beginning of what’s going to be a revolution in how people seek psychological support and, even in some cases, therapy,” Insel said. “And my concern is that there’s just no framework for any of this.”

KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF, an independent source of health policy research, polling, and journalism. Learn more about KFF.
