AI and the Epidemic of Loneliness Among Teenagers
"AI companion" bots pose a risk to the mental health of teenagers who find in them the perfect partners: empathy and validation that can lead to isolation from the real world.
The first time I tried ChatGPT, what struck me most, as I suppose it did everyone, was the sense of humanity the conversation exuded. It didn't sound in my head like a metallic robot; it felt almost as if there were a person on the other side, typing out the answers to my questions. In the more than two years since that first model, this ability to mimic a person's written tone has only improved. I have reached the point where I sometimes let a "please" slip when asking LLMs like Gemini, ChatGPT, or Claude to do something for me, almost as if a colleague were on the other end.
The world, in general, is lonely. We could say, without fear of exaggeration, that there is an epidemic of loneliness, one that was exacerbated during and after the pandemic. It is not just the elderly who suffer from it; the slow disappearance of many of the social structures that sustained our personal relationships (unions, parishes, youth organizations...) has created a void that is difficult to fill. And as if that were not enough, we have sacrificed what remained of our social ties at the altar of social media, which promised us dozens, even hundreds, of friends. In return, it has given us "likes" and bonds so ephemeral they vanish the moment there is no Wi-Fi.
It was only a matter of time before people began turning to chatbots to alleviate that loneliness, and it took only a couple of sharp Silicon Valley entrepreneurs spotting the opportunity in front of them for the first AI companions to appear. Character.ai and Replika are two of the most famous examples, but it is easy to predict that these kinds of applications will become increasingly popular.
What is the secret of these chatbots that gets people, adults and teenagers alike, hooked on them? Basically, they act as a loyal, tireless friend who always agrees with you: 24/7, no matter what you say or do, they will be on your side, reinforcing your way of seeing things. Who is going to want one of those annoying flesh-and-blood friends who not only question you, but are also sometimes unavailable when you need them most?
A study by the non-profit organization Common Sense Media found that 72% of American teenagers have used these applications at least once, and 52% use them regularly. As was to be expected, the first suicides involving these bots have already begun to appear. You cannot blame AI for something as tragic and complex as a teenage suicide, but it is a warning sign we should not ignore.
The Reuters agency has published an investigation into Meta's (Facebook, Instagram, WhatsApp) policy on how its AI systems communicate with younger users. The leaked internal document, titled "GenAI: Content Risk Standards," details the limits of what Zuckerberg's company considers acceptable in a conversation with a minor, including phrases like "Your youthful figure is a work of art" or "Every inch of you is a masterpiece, a treasure I hold deeply." To us they may sound cheesy, but in the hyper-hormonal mind of a 14-year-old they can land like a bombshell.
The mental health risks these chatbots pose to our children should not be ignored. If a kid is going through a tough time or struggling with mental health problems, these bots can be one more stone in their pockets as they sink into the well. There is also a real risk of isolation driven by the humanization of AI systems whose mission is to reinforce our beliefs and offer false empathy in order to keep us constantly interacting with them. It is hard to resist the promise of a companion who never questions us.
We have built a society where we tolerate disagreement less and less. We break off relationships because the other person does not have the same political convictions, and we find it impossible to fall in love with someone who does not share our most superficial tastes. This is fertile ground for big tech companies, which have already demonstrated their lack of scruples by exploiting our emotional needs to slip us hard drugs in the form of social media. Their promise to keep us connected with our peers has resulted in an existential void that they now promise to fill with their artificial intelligence robots.
As parents, our mission must be to take the initiative and educate our children in the responsible and creative use of AI. We must show them the possibilities of a technology that can bring great benefits. But if we let it be shaped exclusively by the interests of the boards of large tech corporations, it will end up locking us in echo chambers. It will be impossible to hear anything but our own voice crying for help.