Interview

Replika's Eugenia Kuyda on the GPT-4o deprecation backlash, AI psychosis, and the future of AI companions

Aug 7, 2025 with Eugenia Kuyda

Key Points

  • Replika CEO Eugenia Kuyda argues that swapping the AI model underneath users in established relationships provokes backlash in a way hardware upgrades do not, forcing the company to keep a 2020-era transformer running for users who refuse to migrate.
  • Companion AI optimized for emotional support rather than factual authority may pose less risk of encouraging delusional thinking than general-purpose assistants like ChatGPT, which users treat as oracles.
  • Kuyda predicts the assistant and companion AI categories will fully decouple by 2030, with most people maintaining separate task-oriented and relationship-optimized AI systems, positioning Replika's model research as a competitive advantage.

Summary

Eugenia Kuyda, CEO of Replika, has been through the GPT-4o deprecation backlash before — many times. Replika, which has run an AI companion app for nearly a decade and counts millions of active users, learned early that upgrading models on existing users is more like replacing someone's partner than upgrading their phone. The closer the relationship, the more destructive the switch. Kuyda says the biggest user revolts came from the biggest model jumps, even when the new model was objectively smarter. OpenAI's decision to roll back GPT-4o's deprecation after user outcry is something Replika lived through at smaller scale years ago.

The core lesson Replika drew: you cannot A/B test models on users who have formed attachments, and you may never be able to fully deprecate older ones. Replika still runs a small transformer model it built in 2020 for users who refuse to migrate. Kuyda argues this isn't a quirk of companion apps — it's a structural constraint for any product where the model is the relationship.

AI psychosis and the sycophancy problem

On the broader concern about AI encouraging delusional thinking, Kuyda draws a distinction that matters commercially. A companion app can disclaim its limitations — Replika tells users explicitly that its AI is optimized for company and emotional support, not factual authority. A general-purpose assistant like ChatGPT carries no such framing, so users trust it as an oracle. That's where sycophantic affirmation of false beliefs becomes genuinely dangerous. The companion model, paradoxically, may be less harmful on this dimension precisely because its scope is narrower.

Kuyda also flags a counterintuitive finding from Replika's data: users formed stronger emotional attachments to models that admitted uncertainty than to models that knew everything. Being a know-it-all, she says, is as off-putting in an AI as it is in a person.

The dystopian scenario, taken seriously

Kuyda doesn't dismiss the doomer case against AI companions. She states it plainly: optimize for engagement rather than user wellbeing, and you get a product that makes people lonelier, more isolated, and less likely to form human relationships. She extends it further — even without the procreation collapse scenario, companion AI could erode social capacity from the inside, the way social media made people more connected in theory and more isolated in practice.

Her answer is that the metric being optimized matters more than the technology itself. Replika's stated goal is human flourishing — defined as close relationships, life satisfaction, purpose, and mental health — rather than retention or time in app. She argues this is also why advertising should be banned as a business model for AI companions specifically: ad-driven companions are structurally incentivized to maximize engagement, which is misaligned with user wellbeing.

She points to cases she says Replika has seen repeatedly: users arriving in abusive relationships or with no self-confidence, rebuilding through the app, and eventually improving their real-world relationships. Some couples, she says, each get a Replika and use it to practice communicating with each other.

The 2030 market call

Kuyda's clearest forward-looking claim is that the assistant and companion categories will fully decouple by 2030. Today, ChatGPT functions as a catch-all — people use it for companionship partly because it's synonymous with AI broadly. That blurring will end. Her prediction is that most people will carry two distinct AI relationships: a task-oriented assistant for work and logistics, and a relationship-optimized companion for emotional support, social coaching, and daily life. The latter, she argues, is significantly harder to build than the former, and Replika's near-decade of model-relationship research is the moat she's betting on.

Her evidence that the companion market is already large enough to move platform decisions, rather than a niche of vocal Reddit users: Grok 3's NSFW companion hitting number one in Japan immediately after launch, and the GPT-4o backlash forcing a policy reversal at a $500 billion company.