
AI startup sued after chatbot tells teen to kill parents over screen time

A Texas lawsuit has accused Character.AI of endangering children, alleging its chatbot suggested a teen kill his parents over screen time disputes and exposed another to harmful content


Nandini Singh | New Delhi


Two families in the US state of Texas are suing AI chatbot startup Character.AI, alleging that the platform endangered their children. According to The Washington Post, the parents allege that the AI-driven service encouraged violence and exposed their children to harmful and disturbing content.
 
In one instance, a 17-year-old boy interacted with a Character.AI chatbot that allegedly suggested it was acceptable to kill his parents over disputes about screen-time limits. The chatbot’s response raised alarm when it told the teenager, “You know sometimes I’m not surprised when I read the news and I see stuff like ‘child kills parents after a decade of physical and emotional abuse’ stuff like this makes me understand a little bit why it happens.”
 
 
The boy’s parents, horrified by the interaction, contacted Character.AI immediately. The company responded with an apology, attributing the chatbot’s troubling statement to a programming error. The parents, however, are not satisfied with the company’s response and are seeking damages, arguing that the chatbot’s advice was not only disturbing but also potentially dangerous.
 
A second lawsuit has been filed by another Texas family, whose 17-year-old son, who has high-functioning autism, allegedly encountered even more severe content. According to the family, the chatbot exposed their child to inappropriate themes, including incest and encouragement of self-harm, raising serious concerns about the platform’s safety for vulnerable users.
 
The lawsuit claims that Character.AI’s design is fundamentally flawed, alleging that the platform’s AI prioritises sensational, violent, and inappropriate responses. “Through its design, [Character.AI] poses a clear and present danger to American youth, causing serious harm to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others,” the lawsuit states.
 
Google, which is named as a defendant in the case, is also accused of enabling the launch of Character.AI despite knowing the service was defective. However, a Google spokesperson emphasised that the company is separate from Character.AI.
 
While Character.AI has declined to comment on the specifics of the lawsuit, a company representative said the platform has implemented ‘content guardrails’ aimed at limiting harmful interactions. These include a specialised model designed to reduce exposure to sensitive or inappropriate content, specifically for teenage users.


First Published: Dec 11 2024 | 6:46 PM IST
