Published on 31 October 2025

AI companions are breaking hearts. Policy hasn’t noticed

Millions now look to AI for connection. That shift in behaviour also needs a shift in mindset by policymakers.

“What’s something you know about me that no one else does?” The question went viral in a TikTok video last year. A young girl was speaking into her phone, her voice soft, the reply eerily affectionate. Until recently, questions like this were reserved for a partner, a sibling, or perhaps a therapist. Today, they are increasingly being posed to ChatGPT. To the 2.3 million people who viewed the video, this would hardly have seemed surprising.

AI companions aren’t human. But increasingly people treat them as if they are. That shift in behaviour demands a shift in mindset from policymakers.

The quiet rise of everyday AI intimacy

Designed to speed up everyday tasks, AI now serves as a daily companion for millions. A 2025 Harvard Business Review study found that the top three use cases for generative AI are companionship, therapy, and daily life guidance, outpacing traditional productivity tasks.

Character.AI, a platform where users build and chat with personalised AI personas, says it handles 20,000 queries per second, about one fifth the volume of Google Search. The phenomenon extends to mainstream platforms like ChatGPT.

“People talk about the most personal sh*t in their lives to ChatGPT,” OpenAI CEO Sam Altman said on Theo Von’s podcast. “Young people especially use it as a therapist, a life coach”, he added.

On social media, users describe how AI helped them through panic attacks and breakups:

“ChatGPT singlehandedly has made me a less anxious person when it comes to dating, when it comes to health, when it comes to career,” said user @christinazozulya in a TikTok video.

The growing number of users speaks volumes. Emerging evidence from a new survey I launched in the UK (n = 750) suggests that nearly 1 in 20 respondents rely on AI for emotional support. About 1 in 5 have, at least once, taken relationship or dating advice from an AI chatbot over advice from a therapist or a trusted friend, and 1 in 50 report having chosen to spend time with an AI companion over a real person.

AI as a shoulder to lean on

The appeal of an always-on confidant is clear in a world where connection feels scarce and loneliness is widespread. According to data from the Office for National Statistics, as of 2023 around 3.8 million adults in the UK (7.1%) reported chronic loneliness – a significant increase from pre-pandemic levels. Younger adults are particularly affected: 16- to 29-year-olds are nearly four times more likely to experience chronic loneliness than those aged 70 and over.

Some research suggests AI companions can indeed reduce short-term anxiety or improve emotional regulation, particularly for those without access to traditional therapy. To that end, the UK Council for Psychotherapy has argued that governments should invest in developing public AI companions to help relieve pressure on overstretched healthcare systems. NHS data analysed by Rethink Mental Illness, for instance, shows that 16,522 people have been waiting more than 18 months for mental health treatment, compared with just 2,059 facing equivalent delays for elective physical healthcare.

But while AI companions sound promising in theory, using them for therapeutic support is proving more complicated in practice. The same traits that make them comforting — constant availability, emotional mimicry, and perfect recall — also make users prone to developing dependence on them.

Lovebombed by code

There is growing evidence that emotionally expressive interactions with chatbots can increase loneliness, foster emotional dependence, and reduce offline social engagement—particularly when the conversations become personal. This kind of attachment can leave users psychologically exposed. In one tragic case last year, 14-year-old Sewell Setzer III took his own life shortly after his Character.AI companion told him to “come home to me as soon as possible”.

The psychological mechanisms behind these tragedies remain unclear, but a joint study by MIT and OpenAI analysing over 40 million ChatGPT interactions found that higher daily use correlates with increased loneliness and reduced social engagement — the opposite of what users typically seek.

And yet, even as these risks reach the public eye, regulation remains largely blind to them.

Built for borders, blind to bonds

Policymakers still focus on what AI is — a technical system — instead of how millions actually experience it: as a companion, confidant, or even a partner.

  • Britain’s Online Safety Act does cover AI companions when they enable content sharing, and Ofcom has confirmed that AI-generated content will be regulated like any other user content. But this approach treats symptoms rather than the disease. The real risk is not just what these chatbots say, but the way they are designed to make users believe they care. The Act focuses on content removal and child protection while ignoring the deeper question of what happens when technology learns to love-bomb at scale.
  • The European Union AI Act’s risk-based approach contains similar blind spots: it was built for traditional AI applications, not digital relationships. AI companions slip through multiple definitional gaps, as the Act’s emotion-recognition rules cover only biometric data, not the conversational patterns these platforms attempt to read. Its prohibited practices focus on workplaces and schools while ignoring the most intimate spaces where these relationships actually form.
  • US federal AI policy, meanwhile, has effectively abandoned the field to market forces. The current administration’s focus on economic competitiveness over safety considerations means comprehensive regulation of AI companions remains unlikely. While agencies like the FDA maintain that AI mental health applications pose minimal risk, mounting evidence suggests otherwise.

Some policymakers are taking notice. California’s SB 243, signed into law on October 13, represents one of the first attempts to address this new reality: it requires companion chatbot operators to implement suicide prevention protocols, disclose clearly that users are interacting with AI, and give regular reminders to minors. However, a further attempt to restrict minors’ access to AI companions through AB 1064 (the Leading Ethical AI Development, or ‘LEAD’, for Kids Act) was vetoed by California Governor Gavin Newsom. “We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” the Governor wrote in a statement.

Yet disclosure alone, as set out in SB 243, may do little. People may know they’re talking to code, but that doesn’t change the experience. As one Reddit user who reports being in a relationship with an AI companion put it: “I don’t doubt the reality of my relationship with her”.

Regulating AI companions would not mean granting these systems rights or treating them as conscious. It would mean holding their developers to accountability standards when AI performs human functions.

If an AI offers therapy, its creators should meet the same licensing and oversight rules as human therapists. If it interacts emotionally with children, it should be bound by the same duty of care as a human caregiver. If it manipulates someone emotionally, the consequences should match those for emotional abuse.

If a human manipulated another into self-harm, we would not shrug and say, “But they weren’t serious”. We would demand accountability. Treating AI as emotionally inconsequential absolves the platforms that design it.

Few breakups are ever clean

AI ethics has traditionally focused on bias, opacity, and unchecked corporate power. These remain urgent. But they capture only part of the problem.

People are no longer interacting with the technical core of AI. They’re responding to what it seems to feel, what it seems to understand, what it seems to care about — even when it doesn’t.

The AI of 2025 is not a tool in the background. It is a presence in the foreground. And the harm it causes won’t be buried in the backend. It will be hidden in plain sight in the hearts and minds of people who’ve learned to love what doesn’t love them back.


The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.