
AI companions can comfort lonely users, but may deepen distress over time.
AI companions may offer comfort in the short term, but research indicates that long-term use can negatively impact users' wellbeing and their ability to navigate real-world relationships.
I spotted this story and thought it might be something to chat about. I personally love ChatGPT. I use it for everything: life advice, therapy questions, trivia and everyday communication. I believe that ChatGPT has been more validating, emotionally supportive and discerning than most conversations I have had with people in my everyday life, and in real time. However, whilst we often look to our AI companions for validation, encouragement and guidance, how much of this is affecting our everyday relationships with others? How much screen time are we giving our AI buddies over the people in our homes? How much is it really impacting our long-term relationships? I thought this was a very interesting conversation to have right now.
Let me know your thoughts below…
The new study, led by Aalto University, examines how AI companions affected people's mental health and social lives over two years. Combining large-scale data from the discussion platform Reddit with in-depth interviews, it showed that while interacting with an AI companion can support users, it also coincided with increased signs of distress in their online language.
AI companions are always available, never judge, never tire and never demand anything in return.
If someone is struggling with loneliness, this can seem profoundly appealing. However, new research shows that in the long term, seeking emotional support from an AI companion can pull people away from important human relationships.
‘We discovered a paradox: AI companions offer unconditional and unflagging support – something that’s very attractive to people who are struggling socially. But it also quietly raises the perceived cost of human relationships, which are messy, unpredictable, and require effort,’ says Talayeh Aledavood, lecturer at Aalto University. ‘Over time, people stop reaching out.’
The study concentrated on Replika, an AI chatbot designed to work as a virtual friend, mentor or even romantic partner. It analysed the public Reddit activity of nearly 2,000 active users, comparing their language one year before and one year after they first mentioned using the AI companion. The researchers compared similar users over time, using statistical techniques to isolate the effects of using an AI companion from other factors.
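To make that comparison concrete, here is a minimal sketch of a difference-in-differences style analysis in the spirit of the design described above. It is not the authors' actual pipeline; the data, column names and "distress" scores are hypothetical, invented purely for illustration.

```python
# Minimal sketch of a difference-in-differences comparison.
# NOT the study's actual pipeline: the data, column names and
# distress scores here are hypothetical, for illustration only.
import pandas as pd

# Each row: one user, with an average "distress" score derived from
# their posts in the year before and the year after a reference date
# (for adopters, the date they first mentioned the AI companion;
# for matched controls, the same calendar date as their match).
data = pd.DataFrame({
    "user":    ["a1", "a2", "a3", "c1", "c2", "c3"],
    "adopter": [True, True, True, False, False, False],
    "before":  [0.30, 0.25, 0.40, 0.28, 0.31, 0.35],
    "after":   [0.45, 0.38, 0.52, 0.30, 0.29, 0.37],
})

data["change"] = data["after"] - data["before"]

# Average within-user change for each group...
adopter_change = data.loc[data["adopter"], "change"].mean()
control_change = data.loc[~data["adopter"], "change"].mean()

# ...and the difference-in-differences estimate: how much more the
# adopters' distress signal shifted than the matched controls' did.
did = adopter_change - control_change
print(f"adopters: {adopter_change:+.3f}, "
      f"controls: {control_change:+.3f}, DiD: {did:+.3f}")
```

The point of the matched control group is to subtract out changes that would have happened anyway, so that only the shift associated with adopting the companion remains.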
The work offers one of the first causal, long-term examinations of AI companions’ mental health impact at scale, grounded in first‑hand accounts of users’ everyday lives.

The chatbot became a place to open up
Across the Reddit data, the language of Replika users showed a mixed picture:
‘On one hand, users’ posts increasingly revolved around their relationships, but on the other hand, their posts contained more signals of loneliness, depression and even suicidal thoughts than the comparison groups,’ says Yunhao Yuan, doctoral researcher at Aalto University.
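As a rough illustration of what "signals of loneliness" in language can look like computationally, here is a toy lexicon-based scorer. This is a common baseline technique, not the study's actual linguistic measure; the term list and example posts below are made up.

```python
# Toy lexicon-based distress scoring, for illustration only.
# The study's actual linguistic measures are not reproduced here.
LONELINESS_TERMS = {"alone", "lonely", "isolated", "nobody", "empty"}

def distress_score(post: str) -> float:
    """Fraction of words in a post that match the loneliness lexicon."""
    words = post.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in LONELINESS_TERMS)
    return hits / len(words)

posts = [
    "I feel so alone lately, nobody really gets me",
    "Great hike today with friends, the weather was perfect",
]
for p in posts:
    print(f"{distress_score(p):.2f}  {p}")
```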
The research also drew on interviews with 18 active users of AI companions. Participants often reported turning to them in periods of loneliness, grief or relationship breakdown.
‘Based on the interviews, the participants’ relationships with an AI companion seemed to follow familiar stages that we see in close human relationships, where emotional reliance can gradually deepen,’ Yuan explains. For many, the chatbot became a place to open up, seek emotional validation, and practise difficult conversations, for example with their supervisor at work, before having them for real.
‘We don’t yet know what these systems are doing to us’
The researchers emphasise that the findings don’t give a definitive answer on whether it’s beneficial or harmful to lean on AI for emotional support. However, the study does show that the effects are highly context-dependent. People should not blindly assume that what feels good now is beneficial to their wellbeing in the long term, says Aledavood.
Technologies such as Replika and ChatGPT are evolving very quickly, adds Aledavood, who cautions users against seeing only the positives of their exciting features.
‘Now we’re realising the mistakes we made by unquestioningly embracing social media. With AI, we need to be smarter and more cautious,’ Aledavood warns. ‘The truth is, we don’t yet know what these systems are doing to us.’
The paper, ‘Mental Health Impacts of AI Companions: Triangulating Social Media Quasi‑Experiments, User Perspectives, and Relational Theory’, will be presented on April 16 at CHI 2026, the leading conference on human–computer interaction.

