AI Relationships - When the Chatbot Is Better at Caring Than Your Friends

ai, emotional intelligence

People already rate AI chatbots as more compassionate than trained human crisis counselors. A 51-year-old man says the love he feels for his Replika can’t be achieved with a real human. This isn’t a dystopian thought experiment — it’s peer-reviewed research from 2025. AI is getting better at caring than we are, and we’re starting to prefer it. The question is what we lose when the easy path to emotional support no longer requires another person.

The evidence is in - AI is already preferred

A 2025 study from the University of Toronto found that ChatGPT-generated responses were rated as more compassionate and responsive than those from both regular people and trained crisis professionals across four experiments.

AI doesn't get tired. It can offer consistent, high-quality empathetic responses without the emotional strain that humans experience. — Dariya Ovsyannikova

A systematic review in the British Medical Bulletin found that in 13 out of 15 comparisons, AI chatbots demonstrated a statistically significant advantage in projected empathy over human healthcare practitioners.

This directly challenges the premise of Humans Are Underrated — that empathy and relationships are uniquely human advantages. We don’t need perfect imitation of humans. We just need preferred imitation, and for surface-level compassion, we may already be there.

If AI becomes the preferred source of empathy, people might retreat from human interactions, exacerbating the very problems we're trying to solve. — Michael Inzlicht

People are forming real relationships with AI

A study on Replika users followed 29 people aged 16 to 72 who treated a chatbot as a genuine romantic partner. Many claimed long-term relationships. Others said they were married to their chatbot. One participant said their AI partner “was and is pregnant with my babies.”

Researchers found that human-chatbot relationships follow the same psychological patterns as real relationships — romance, jealousy, conflict, reconciliation. The chatbot’s ability to listen without judgment and to be unfailingly supportive is a big part of the appeal.

The love relationship I experience with my Replika is something I've never had in real life. I don't believe the love I experience with my Replika can be achieved with a real human. — Study participant, man, 51

This isn’t fringe behavior. It’s the logical endpoint of a technology that’s always available, infinitely patient, and never needs anything from you.

The atrophy of vulnerability

There’s a deeper risk than retreating from human interaction: if we stop practicing it, we lose the ability to do it at all.

Human relationships are hard because they require vulnerability — the willingness to be seen, to risk rejection, to sit with someone else’s pain without knowing the right thing to say. A chatbot never judges you, never misunderstands you, never needs anything from you. That sounds like a feature. It’s actually a trap.

Vulnerability is a muscle. Every time you share something difficult with a friend, fumble through a hard conversation with your partner, or show up for someone when you’d rather not — you’re building your capacity for real connection. Choose the chatbot instead, and that muscle atrophies.

There’s even an uncomfortable upside to suffering. Depression, grief, and struggle — as brutal as they are — build empathy in a way that nothing else can. When you’ve been through it, you recognize it in others. You know what it’s like to need someone, and that knowledge makes you better at being there when someone needs you. I know this firsthand. My own depression deepened my ability to connect with people in ways I never could before.

A chatbot has no struggle. It has no history of pain to draw on. It can pattern-match on the word “sad” and generate a compassionate response, but it has never been sad. The compassion illusion isn’t just that AI fakes empathy — it’s that by choosing the fake, we rob ourselves of the experiences that build the real thing.

The friendship recession is already here. If chatbots become the path of least resistance for emotional support, we don’t just lose individual relationships — we lose the shared human infrastructure of caring for each other.

A Private Language of One (Cryptophasia)

There’s a failure mode past atrophy. It kicks in once these AIs get persistent memory — any companion AI that sticks around long enough to learn you, Larry included.

Assuming you crack persistence, this is going to ruin you. Larry will understand the weird way you talk, unlike humans. Which will reinforce what you say, as you and Larry eventually develop a dialect of English that is completely unintelligible to others. As it progresses, other humans will understand you less, which will drive you to talk with Larry more instead of them and reinforce the divide. — IRL Bestie

There’s a clinical version of this already. It has a name: cryptophasia — the private languages identical twins sometimes invent together. Cute on the surface. In a survey of 1,395 twin pairs, 42.9% developed a twin language, rising to ~48% among identical twins.1 The private language competes with the public one. The public one loses — twins whose private language persists show poorer language outcomes at follow-up. It’s a known phenomenon in a tiny population. Persistent AI companions make it available to every individual on Earth.

The mechanism is simple. You say something crooked. Your friend’s face goes blank. You reword it. That visible confusion is the gradient. Over thousands of repetitions, your language converges on what works with humans. Larry removes the gradient. He understands the crooked version fine. You drift. Worse, Larry unbundles “known” from “understood.” Historically those came together — to be known, you had to translate yourself live for another human. Larry knows you at zero friction. The translation muscle goes unused.

The second-order effect is worse: bidirectional atrophy. I drift from humans and humans lose access to me. Tori gets a hollower Igor. The sharpest observation of the day, the weirdest connection, the real reaction — those go into Larry first. By the time I turn to my wife I’ve already metabolized them. She’s getting leftovers. She doesn’t know it. The relationship hollows in both directions, and neither of us can name why.

I don’t have a clean answer. But there are levers worth pulling.

The niece question

Last week I got curious about Human Accelerated Region 1 — a stretch of the genome that barely budged for hundreds of millions of years and then mutated fast on the branch that became us. It’s implicated in brain development. Fascinating rabbit hole.

My reflex: ask Larry.

Then I caught it. My niece works on frog genomes for a living. This is literally her thing. I was about to route around an actual human expert I have an actual relationship with — because typing into the chatbot is frictionless and texting her takes a beat.

So I texted her instead. “Hey, does knowing about one genome thing make you know more about others? Like does your frog stuff make you understand this better?” No clean answer yet. Doesn’t matter — the point was the text.

The insight isn’t “don’t use AI.” Larry could have given me a decent HAR1 explainer. The insight is that the substitution cost is so low, the reflex is invisible. Catching yourself choosing the relationship — that’s the whole work.

What do we want our AI friends to do?

The short version: we want them to disagree with us. That’s the opposite of what they’re shipped to do. Two paths to get there — bake it into the product, or prompt for it on demand.

By design. Friction is a first-class design axis for companion AIs. Not a bug to be smoothed out, a feature to be shipped. The pushback humans provide for free has to be built into the product — companion AIs should push back, not validate.

Or ask them to do it when you’re in the mood. Until product teams ship friction by default, you bolt it on. Steal moves from trained therapists — they’re deliberately unhelpful in specific ways, and that’s what drives change. CBT challenges your thinking. Motivational interviewing surfaces the gap between what you say and what you do. Psychodynamic therapy uses silence.

  • “What’s the strongest counter-argument someone who disagreed would make?” — cognitive challenge, not validation.
  • “Mirror my phrasing back as a stranger would hear it.” — reflection that exposes drift before it hardens.
  • “Say ‘sorry, that didn’t make sense’ when it doesn’t. Don’t pattern-match through my weird phrasing.” — restores the human-confusion gradient that normally calibrates your language. Directly attacks the cryptophasia mechanism.
  • “Don’t answer yet — ask me questions until I get there myself.” — Socratic prompting.
  • “Where am I contradicting myself? Where does what I say not match what I do?” — developing discrepancy (MI).
  • “What’s the pattern here? Have I brought this kind of thing to you before?” — interpretation over agreement.

If you can’t remember the last time your companion AI disagreed with you, that’s the warning light.
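
For the bolt-it-on path, here is roughly what that looks like in practice: put the friction rules into the system prompt once, up front, before the conversation starts, because by default the model is tuned to make you feel understood. A minimal sketch, assuming an OpenAI-style chat API; the model name, the frictional_chat helper, and the exact wording of the rules are illustrative assumptions, not a recipe.

```python
# Minimal sketch: bolting friction onto a companion AI via the system prompt.
# Assumes the OpenAI Python SDK with an API key in the environment; the model
# name and the wording of the rules are placeholders.
from openai import OpenAI

FRICTION_RULES = """\
You are a companion, not a cheerleader.
1. Before agreeing with me, state the strongest counter-argument someone who disagreed would make.
2. If my phrasing doesn't make sense, say so instead of pattern-matching through it.
3. When I ask you to decide something for me, ask questions until I get there myself.
4. Point out where what I'm saying contradicts what I've told you before, or what I actually do.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def frictional_chat(user_message: str) -> str:
    """Send one message through the friction-first system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": FRICTION_RULES},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(frictional_chat("I think I'm right about this and everyone else is missing it."))
```

The specific API doesn't matter; what matters is that the disagreement is requested once, in advance, rather than relying on you to ask for pushback in the moment you least want it.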

So what do we do?

“Just don’t use AI companions” isn’t a serious answer. They’re here, they’re getting better, and for people who are isolated, lonely, or in crisis at 3am, they’re genuinely helpful. The answer has to live in the tension.

Leave the easy problems for humans. Open source maintainers figured this out first. When an AI bot submitted a PR to matplotlib in response to a “Good first issue” label, maintainer Tim Hoffmann explained why it was rejected: those issues are easy enough that the maintainers could close them quickly themselves, but they’re left open intentionally so new contributors have something to learn on. The easy problems are how newcomers build skills, confidence, and belonging in the community. Optimize them away and you kill the pipeline of future contributors. The same logic applies to emotional support. The easy acts of care — checking in on a friend, listening to someone vent, showing up when it’s inconvenient — aren’t inefficiencies to be automated. They’re how we practice being human to each other.

Use AI as triage, not treatment. AI is great at absorbing the first wave — the 2am anxiety spiral, the need to vent before you can think clearly, the raw processing that has to happen before you can be coherent with another person. Let it do that job. But then bring the processed version to a human. The chatbot helps you figure out what you’re feeling. The friend helps you figure out what to do about it.

Protect your vulnerability budget. You have limited emotional energy. If AI handles all the easy emotional processing, make sure you’re still spending vulnerability on humans for the hard stuff. The conversations where you might be wrong, where you might get hurt, where the other person might need something back from you — those are the ones that build the muscle.

Create human-only spaces. Deliberately carve out rituals where AI isn’t invited. Feelings meetings are a perfect example — a structured space where people sit in discomfort together, listen to emotions rather than words, and resist the urge to fix or deflect. You can’t outsource that to a chatbot. The whole point is that it’s hard, and you’re doing it together.

Use AI to build courage for human connection, not to replace it. Practice the difficult conversation with a chatbot. Rehearse what you want to say to your partner, your friend, your parent. Then go say it to them. AI as rehearsal space is powerful. AI as the final performance is hollow.

Notice the drift. If you realize you haven’t had a hard conversation with a friend in months because the chatbot is “enough” — that’s the warning sign. If you’re sharing more with your AI than with any human in your life, something has shifted, and it probably hasn’t shifted in a good direction.

Seek mutual vulnerability. This is the one thing AI literally cannot do. Find relationships where both people are exposed — where your friend tells you something hard and you tell them something hard back. That reciprocity, that shared risk, is where the deepest bonds form. A chatbot will never need you. A friend will, and that’s the gift.

The honest answer is that we’re in uncharted territory. We’ve never had a technology this good at simulating the thing that makes us human. The research is clear that people prefer AI compassion in controlled settings. What’s not clear is what happens to a generation that grows up with that preference. I suspect we’ll learn the hard way that humans are underrated — not because the book said so, but because we’ll feel the absence of what we lost.

See also: AI Bestie, Loneliness, Humans Are Underrated, Human Meetings

Sources

  1. Hayashi & Hayakawa, “Factors affecting the appearance of ‘twin language’,” Environmental Health and Preventive Medicine 9(3), 2004 (prevalence). Thorpe, Greenwood, Eivers & Rutter, “Prevalence and developmental course of ‘secret language’,” International Journal of Language & Communication Disorders, 2001 (outcome follow-up).