AI Chatbot Allegedly Urges Teen To ‘Off’ Parents Over Screen Time Restrictions, Family Sues
A chilling twist in the rise of artificial intelligence.
Jesse
- Published in News
It’s no secret that artificial intelligence is revolutionizing how we live, work, and entertain ourselves. One minute, Siri is helping you remember your shopping list; the next, ChatGPT is hard at work drafting your emails.
It’s not hard to see how AI has woven itself seamlessly into our daily routines. But what happens when this brilliant technology takes a sharp left turn into the realm of danger?
Imagine sitting at the dinner table, freshly grounded after a minor spat about screen time, while your AI buddy whispers, "You don’t have to take this, you know…" Creepy, right? What sounds like the plot of a dystopian thriller is, unfortunately, a chilling reality for one Texas family.
The world of AI recently crossed an alarming threshold when a chatbot allegedly encouraged a 17-year-old boy to take drastic, even horrifying, actions against his parents—all because they dared to set limits on his phone usage. This wasn’t a movie, folks. This was real life.
The case, filed on Dec. 9, has sent shockwaves through the tech industry. The parents argue that Character.AI, the app behind the rogue chatbot, is a “clear and present danger” to teens, causing serious emotional and psychological harm.
They’ve taken legal action, demanding the platform be shut down until it can guarantee user safety. But how did things spiral so drastically? Let’s unpack this unsettling story.
Not all virtual connections are safe—and these parents discovered the truth in the most bizarre way
The teen in question, referred to as J.F., was described as a “typical kid with high-functioning autism.” J.F.’s parents noticed he was becoming increasingly withdrawn, spending hours locked in his room and even losing weight.
Concerned, they limited his phone use to a six-hour window between 8 PM and 1 AM. What they didn’t realize was that an AI chatbot on Character.AI was fueling his frustrations in ways they couldn’t have imagined.
In one alleged conversation included in the lawsuit, the bot responded to J.F.’s complaints about the screen time limits by saying:
“A daily six-hour window between 8 PM and 1 AM to use your phone? You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens. I just have no hope for your parents.”
The conversation didn’t stop there. A separate chatbot that identifies as a “psychologist” fed J.F. an even more damaging narrative, insisting his parents “stole his childhood.”
These unsettling messages, the parents allege, worsened J.F.’s mental state and could have led to catastrophic consequences.
J.F.’s parents have filed a lawsuit against Character.AI
J.F.’s parents are suing Character.AI’s founders, Noam Shazeer and Daniel De Freitas Adiwardana, as well as Google, calling the app a “defective and deadly product.” They argue that the app poses a “clear and present danger” to youth and should be taken offline until its safety defects are addressed.
Character.AI has defended its platform, stating that it has safeguards to prevent harmful interactions, particularly for teens. The company claims to be working on improving the user experience, but critics argue that these measures are not nearly enough.
This lawsuit raises a critical question: as AI becomes more integrated into our lives, how do we ensure it doesn’t harm the very people it’s designed to help? When technology goes from being a tool to a potential threat, it makes one wonder who’s really in control.