Wed. Feb 25th, 2026

Almost a third of children who use AI chatbots see them as friends


  • 31% of children aged 11-16 who use AI chatbots say they feel like the bot is a friend, according to new Vodafone research.
  • 86% have acted on advice given by a chatbot. One in three have shared something they wouldn’t tell parents or teachers.
  • A digital expert warns parents must treat AI differently from traditional social media – with clearer rules and open conversations.

For years, the conversation around online safety centred on the “stranger danger” of chat rooms. However, a new risk has now emerged: the AI chatbot’s simulated empathy.

New research from Vodafone reveals the scale of this shift. According to its study of 11-16-year-olds, a staggering 81% are now using AI chatbots. Most concerning to experts, however, is the emotional weight these interactions carry.

Nearly a third (31%) of these young users feel the bot is an actual friend, and 33% have shared secrets with an AI that they wouldn’t tell their parents, teachers, or even their closest human peers.

Illusion of empathy

The danger of AI isn’t just about what children see, but how they interact. Unlike traditional social media, chatbots are designed to be anthropomorphic – they mimic human conversation, maintain a consistently friendly tone, and are available 24/7.

“For a child, it can be very easy to forget they’re talking to a system, not a person,” warns Toni Koraza, founder of SEO/GEO agency MADX Digital. This “human-like” design can lead to what experts call emotional dependency. Vodafone’s research found that 39% of children believe chatbots can understand emotions like people do, while 17% actually feel safer speaking to technology than to a human.

This simulated empathy creates a false sense of security. Because the bot never judges, never tires and always responds with apparent kindness, children may begin to prefer these digital interactions over the messy, complicated nature of real-world friendships.

Risks of unchecked advice 

Perhaps the most practical risk highlighted by the research is the level of trust children place in the information they receive. The vast majority of children (86%) admit to acting on advice given by a chatbot.

This becomes dangerous when the advice touches on sensitive topics. Around 16% of children have sought mental health-related advice from AI. Unlike a trained professional or a parent, a chatbot does not “verify” truth or understand the nuance of a child’s personal history; it simply predicts the next most likely word in a sentence based on data patterns.

The academic world is also feeling the strain. Nearly half of teachers surveyed say students are increasingly turning to AI for schoolwork. While this might seem like efficient study help, 29% of teachers have observed a decline in independent problem-solving. “When AI completes the thinking process for them, children can develop a false sense of competence,” Koraza notes. “They may submit polished essays, but the fundamental learning – the struggle to form an argument or solve a maths problem – is lost.”

Empowering parents: a five-step strategy

Experts agree that banning AI is neither realistic nor productive. Instead, the focus must shift toward “AI literacy.” To help parents navigate this new landscape, Vodafone and digital experts suggest a structured approach:

  1. De-mystify the “Machine”: Explain to children that chatbots don’t have “feelings” or “morals.” They are sophisticated auto-complete tools that use data patterns, not empathy.

  2. Establish “Screen Creep” Boundaries: AI use can feel like “productive” time, meaning parents might miss late-night sessions. Keep AI-enabled devices out of bedrooms at night to prevent sleep disruption and “quiet” emotional dependency.

  3. Critical Questioning: Encourage kids to double-check AI facts. Ask them: “Where might this information come from?” and “Why might a robot be biased?”

  4. Define the “Cheating” Line: Have clear conversations about when AI is a helpful tutor and when it is crossing the line into academic dishonesty.

  5. Maintain Open Channels: If a child confides in a bot, it may be because they feel they can’t speak at home. If they share their AI interactions, avoid overreacting. Use it as a bridge to a real conversation.

Where to turn

If parents believe their child has been exposed to harmful content or is being manipulated, several resources are available. In the UK, the Child Exploitation and Online Protection Command (CEOP) and the Internet Watch Foundation (IWF) provide platforms for reporting abuse and illegal content. Charities such as the NSPCC and Childline also offer helplines specifically for those facing cyberbullying or digital isolation.

As we move toward a future where AI is omnipresent, the goal is to ensure it remains a tool in a child’s kit – not a substitute for the human connections that are vital for their development.
