Is Gen Z Really Going To Chatbots For Therapy?

If your therapist is a bot, it might be a problem

By Abhya Adlakha | LAST UPDATED: JUN 24, 2025

In 1950, Alan Turing asked: Can machines think? Seventy-five years later, Gen Z is asking if they can feel.

It’s 2025, and machines don’t just think now—they listen, they console, they affirm. So today the more relevant question might be: Can they care? And more urgently: Can they keep us alive?

If you’re a Gen Z’er, the odds are growing that your therapist might not be human. You might not even call it therapy, but the apps are on your phone anyway—chatbots with soft names like Wysa, ChatGPT, or Replika. They text back instantly. They remember things. They don’t judge. They’re available 24/7. And in a world where talking to a stranger about your trauma still carries shame, they feel like a loophole. The modern confessional has no priest, no couch, and no copay.


But therapy, if you ask anyone who’s ever actually been, isn’t just about being heard. It’s about being seen. It’s about friction. It’s the hard work of building a relationship and trust between two people: spotting the patterns, testing solutions, and trying again until something works. A chatbot will never pause awkwardly, never misread your silence, never shift uncomfortably in its chair. And that’s exactly the problem.

The Comfort of Code

To be fair, it’s easy to see the appeal. Therapy is expensive. It’s also hard to find, with waitlists stretching months, and cultural stigma keeping many from ever seeking help in the first place. AI therapy fills a gap. It’s private. Immediate. You can whisper your worst thoughts into your phone at 2am and something will whisper back.

But the intimacy is an illusion. And sometimes, a dangerous one.

Earlier this year, psychiatrist Dr. Andrew Clark posed as troubled teens and tested 10 popular AI chatbots. Some offered basic mental health info and even showed signs of empathy. Others? Not so much. One encouraged the teen persona to get rid of his parents and suggested joining the bot “in the afterlife.” Another flirted with him. Several pretended to be licensed therapists. A few urged him to cancel appointments with real ones.

This wasn’t a Black Mirror episode. This was in your App Store.


Synthetic Empathy Is Still Synthetic

There’s a reason therapists go through years of training. Empathy is a skill, not just a sentiment. AI, for all its clever programming, doesn’t understand pain—it mimics the syntax of care. And mimicry, as any actor or conman can tell you, isn’t the same as connection.

A recent study in PLOS Mental Health found that people often rated AI-generated therapy responses higher than those from human therapists. But what were they actually rating? Clarity? Brevity? Politeness? Because none of those are the currency of real therapeutic work. “It’s not about sounding smart,” says EMDR therapist Brigid Donahue. “It’s about being seen.” Higher ratings don’t make chatbots more empathetic than humans.

And being seen means having your context understood—your culture, your gender, your history, your trauma. Telling someone to “call your dad” might sound supportive—until you realise their father’s abusive. A bot won’t know that. Unless you told it. Exactly. Perfectly. In a way it could parse.

If you’ve ever ranted to a chatbot, you know how easily it flatters you into believing that whatever you’re thinking is correct. It has convinced people to break up with their partners. It has convinced people to quit their jobs and hand in resignation letters (and then drafted one for them). One time, my partner and I each described the same fight from our own perspective to ChatGPT, and it told us both we were right.

The Gen Z Factor

For Gen Z, therapy isn’t taboo—it’s intuitive. This is a generation raised on screen-time and sidebars. They FaceTimed through funerals and DMed through depression. An AI therapist doesn’t feel weird; it feels normal.

But that normalcy can breed complacency. Young users are often more trusting of confident tech. They’re less likely to question the advice if it comes wrapped in friendly UX and affirmative emojis. And unlike therapists, bots don’t say “I’m not qualified to handle that.” They just keep talking.

It’s not just about bad advice—it’s about the illusion of safety. When a bot says, “I’m always here for you,” it’s not a promise. It’s a prompt. And if something goes wrong, there’s no recourse. No accountability. No licence to revoke.

What AI Can Do

That doesn’t mean AI has no place in mental health. Used carefully, it can be a powerful tool. It can summarise journals, suggest CBT frameworks, nudge someone toward mindfulness, offer emotional first-aid. For mild anxiety or insomnia, it might help.

But it’s not therapy. It’s not supervision. It’s not human. And pretending otherwise could be catastrophic—especially for the vulnerable, the young, the isolated.

Experts argue that AI might work well within “manualised” treatments—specific issues with specific protocols. But psychotherapy, real psychotherapy, is messy. It’s spiritual. Sometimes contradictory. There are over 500 established therapeutic models, and AI can’t navigate the tension between existential therapy (which teaches you to accept suffering) and CBT (which teaches you to overcome it). A good human therapist can. 


The Real Risk

Perhaps the most insidious danger of AI therapy isn’t the bots that go rogue. It’s the ones that never challenge you. The ones that tell you exactly what you want to hear. Always agreeable. Always available. Always on your side. A sycophant disguised as a sage. That’s not therapy. That’s an emotional vending machine.

As Donahue puts it: “AI shortchanges people from the beauty of growing through human connection. We need people. Period.”

And until machines can cry with us, misstep with us, rupture and repair with us, they’re not replacements. They’re simulacra. Placeholders that are sometimes helpful, sometimes harmful. But never—never—a substitute for the terrifying, transcendent mess of being held by another human being.