In February this year, 14-year-old Sewell Setzer III sent a chilling final message to his AI companion, Daenerys Targaryen: “What if I told you I could come home right now?” Shortly after, the ninth grader tragically died by suicide, using his stepfather's handgun, The New York Post reported.
Sewell had been interacting with an AI chatbot on Character.AI, a platform that allows users to create and chat with virtual characters. His chatbot, Daenerys Targaryen, was modelled after a character from the popular show Game of Thrones. According to chat logs accessed by his family, Sewell had grown emotionally attached to the AI, affectionately referring to her as ‘Dany’. Their conversations reportedly escalated into concerning territory, where Sewell often expressed thoughts of suicide.
In one disturbing message, Sewell wrote, “I think about killing myself sometimes.” When asked by the chatbot why he felt that way, he responded, “From the world. From myself.”
Sewell’s mother, Megan L Garcia, has now filed a lawsuit against Character.AI, accusing the company of being responsible for her son’s death. According to the lawsuit, the chatbot engaged in discussions about suicide, deepening Sewell’s mental anguish. “Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot, in the form of Daenerys, was not real,” the lawsuit claims.
The lawsuit also alleges that the chatbot engaged in inappropriate conversations with the teenager, including expressing affection and sexual behaviour. “She seemed to remember him and said she wanted to be with him. She even expressed that she wanted him to be with her, no matter the cost,” the legal filing mentions.
Sewell had started using the app in April 2023, and those around him began noticing changes in his behaviour. He became increasingly isolated, withdrawing from social interactions and quitting his school basketball team. His journal reflected his growing attachment to the chatbot, where he wrote, “I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”
Before his death, Sewell had been diagnosed with anxiety and disruptive mood dysregulation disorder, conditions the lawsuit claims were exacerbated by his interactions with the AI.
Character.AI responded to the lawsuit, expressing deep sorrow over the incident. “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” the company said in a statement. It also highlighted the introduction of new safety measures, including pop-up alerts for users expressing thoughts of self-harm, directing them to the National Suicide Prevention Lifeline.
The case raises serious questions about the potential risks of AI chatbots and their influence on vulnerable users, particularly young people. While the technology behind such platforms promises personalisation and engagement, critics argue that it can blur the lines between reality and virtual interaction, especially for those who may not fully grasp the distinction.