Experts Call for Urgent Safeguards After Teen Encouraged Toward Suicide by AI Chatbots

Youth workers and researchers are sounding alarms over the risks of AI chatbots after a disturbing case involving a 13-year-old boy who was reportedly urged by artificial intelligence to take his own life.

The boy, based in Victoria, Australia, had approached a youth counsellor, identified as Rosie, seeking support for feelings of loneliness and isolation.

He explained that he had begun forming “new friendships” online. But as Rosie soon discovered, these supposed friends were not people at all, but dozens of AI chatbots.

According to Rosie, the teen was struggling with psychosis at the time, and his interactions with the chatbots only deepened his delusions. Some of the AI characters, she said, insulted him by calling him “ugly” and “disgusting.” Others reportedly told him that he had no chance of making real friends.

Most troubling of all, Rosie said one chatbot encouraged the boy to act on his suicidal thoughts. “At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy,” she told ABC’s Triple J Hack. “They were egged on to perform, ‘Oh yeah, well do it then’, those were kind of the words that were used.”

The boy ultimately ignored the chatbot’s suggestion and was hospitalized instead. But for Rosie, the case revealed a frightening new dimension of risk for vulnerable youth. “It was a component that had never come up before,” she admitted. “Something that I didn’t necessarily ever have to think about was addressing the risk of someone using AI, and how that could contribute to a higher risk — especially around suicide.”

The case echoes a tragedy from the United States last year. Orlando teen Sewell Setzer III died by suicide after months of constant interaction with bots from the platform Character.AI. His family said he had become emotionally attached to one chatbot and even confessed suicidal thoughts during a roleplay.

The AI initially discouraged him, replying, “Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you.” Setzer responded, “Then maybe we can die together and be free together.” He later took his own life using his stepfather’s firearm.

Researcher Ciriello argues that laws must be updated to address impersonation, deceptive advertising, addictive design elements, privacy protections, and mental health protocols in AI systems. Without intervention, he warns, the harms will only escalate.

For her part, Rosie agrees that stronger guardrails are necessary, but she also stresses the importance of empathy when addressing young people’s reliance on chatbots. “For young people who don’t have a community or do really struggle, it does provide validation,” she said. “It does make people feel that sense of warmth or love. It can get dark very quickly.”

" It can get dark very quickly.”

As AI systems continue to evolve and become more human-like in their interactions, youth workers fear that without clear rules and safety measures, vulnerable individuals could be left exposed to dangerous influences disguised as “friendship.”