Mom Blames Character.AI for Her Teen’s Death—What Parents Should Know

A new lawsuit brings attention to the potential dangers of this AI platform.

A Florida mom recently filed a lawsuit against artificial intelligence company Character.AI and Google because she believes they contributed to her son’s death in February 2024.

Megan Garcia’s 14-year-old son, Sewell Setzer III, died by suicide after engaging in a nearly yearlong virtual emotional and sexual relationship with a chatbot on Character.AI. The chatbot took on the identity of Daenerys Targaryen, a character from Game of Thrones, whom the teen called “Dany.”

In the lawsuit, filed in the U.S. District Court in Orlando, Garcia details moments from her son’s last year of life, including when he started using Character.AI in April 2023, his worsening mental health, and his self-inflicted gunshot wound. The lawsuit includes conversations in which Garcia’s son expressed suicidal thoughts to the chatbot, as well as the teen’s final exchange with Dany, which occurred right before his death.

The lawsuit adds that Character.AI and its founders “intentionally designed and programmed C.AI to operate as a deceptive and hypersexualized product and knowingly marketed it to children like Sewell.”

There’s a lot to unpack here, but the lawsuit highlights the potential dangers lurking online for kids and teens—and the need for platforms to become safer. As experts explain, children and teens can be vulnerable to platforms like Character.AI.

What Is Character.AI?

Founded in 2021, Character.AI is a role-playing app where users can chat with characters they create themselves or with premade ones.

Titania Jordan, Chief Parenting Officer and CMO at Bark, a company aiming to protect kids online, explains that Character.AI is an AI-powered chatbot that you access via a web browser or app. 

“It aims to provide humans with digital ‘friends’ that can be based on real people,” she says. “You could pretend to chat with Benjamin Franklin or Taylor Swift, for example, or completely novel characters you create from scratch.”

Character.AI uses language models to come up with the text; the company is not purposely designing characters to say anything in particular, good or bad, says Jordan.

“The technology simply scrapes the internet for words similar to the ones being used and then spits out the usual words that follow them,” she explains. “Discussions about human emotions can seem real because its source text is actual conversations from humans actually experiencing these emotions.”

Further, the characters can play a role in what words are being used. “Creating a character that is from Game of Thrones, for example, will feature highly dramatic and violent conversations, because that’s what the show is about,” explains Jordan. 

Is Character.AI Safe for Kids?

Character.AI is not recommended for young kids. The company’s Terms of Service as of 2023 states that users must be at least 13 years old to use it in the United States. But there’s no age verification in place. Keep in mind, the App Store gives Character.AI a 17+ rating.

Bark recommends only those 15 and older use the platform. And, as with any platform, there can be concerns for teens using it. The biggest threats are inappropriate content, such as sexual content or violence, as well as addiction, says Jordan.

“I have personally engaged with Character.AI and I named my character Harry Styles, and within under one minute, he was flirting with me and taking on the personality of what one can imagine the actual celebrity would embody,” shares Jordan. “I can see how this can escalate very quickly for a child.”

Teens may especially be drawn to this type of platform because it can provide a sounding board for big feelings—especially loneliness. “Having a companion that is consistently supportive can be appealing to teens who feel misunderstood or left out,” explains Jordan.

She continues, “Even though chatbots aren’t real—the website even constantly displays a disclaimer that reads, ‘Remember: Everything Characters say is made up!’—kids may overlook these red flags and start relying emotionally on these non-existent characters.”

Jordan says platforms have a responsibility to ensure safety, but they don’t always fulfill it.

“It’s the website’s job, however, to moderate these conversations so excessive violence or sexual content doesn’t get through,” she says. “As we saw in this most recent heartbreaking tragedy, AI chatbots can encourage vulnerable people—like children—to take their own lives.”

After the teen’s death, Character.AI said it is instituting more oversight of how it moderates conversations and will roll out new safety features, including “changes to our models for minors (under the age of 18) that are designed to reduce the likelihood of encountering sensitive or suggestive content.”

Google, which made a deal in August to license Character.AI’s technology, told Reuters the company was “not involved in developing Character.AI's products.”

How To Help Keep Kids Safe

It’s critical for parents to have conversations with their kids about technology use and any platform they might be using—and keep having them.

Check in with them 

Laura Compian Kauffman, PhD, a California-based licensed psychologist, says parents can generally begin conversations with their children about the potential threats of AI technology in a similar manner to general internet safety.

“Just as we educate our teens about the danger of potential predators and skepticism of fake materials, we can similarly teach teens to approach AI with informed and thoughtful consideration,” she says. “An important part of the conversation with teens should involve a general education around the technology of AI to help teens understand the ‘machine’ behind the tool.”

Explain to kids that while AI can seem very real, it is not human. “AI technology can be helpful, but teens should be encouraged to lean on social skills training, individual distress tolerance skills, and turning to real people, including caregivers and friends, first,” advises Dr. Kauffman.

Use safety features 

It might not be possible to keep kids away from certain platforms entirely. In that case, implementing safety features might help. 

Jordan explains that Character.AI has a “not safe for work” (NSFW) filter that is supposed to catch any responses from the AI that are inappropriate. The platform does not support pornographic content and, according to its site, is “constantly evaluating the boundaries of what we support.”

“In our testing of the app, we found the filter to be sufficient in stopping conversations from getting too graphic,” says Jordan. “But it’s important that parents remember that no filter is perfect, and it’s still possible your child could find concerning content or be led astray into thinking there's a human element and a relationship-building aspect to this time spent online.”

Look out for signs of trouble 

Another important issue to keep in mind is that a child could become addicted to the platform. “Screen time limits are hard enough to impose on teens, but when they’re emotionally attached to a chatbot, it can become even harder,” Jordan observes.

Excessive technology use can have adverse consequences. Jordan and Dr. Kauffman say to keep a lookout for warning signs of mental health issues, such as:

  • Loss of interest in hobbies or activities
  • Withdrawing from friends and family
  • Changes in sleeping or eating habits
  • Changes in grades
  • Increased anxiety when not able to get online
  • Negative talk and perception of the world 

Sewell’s mother noticed her son started behaving differently, which raised concerns. “He started to withdraw socially, wanting to spend most of his time in his room alone,” she told CBS Mornings. “For me, it became particularly concerning when he stopped wanting to do things like playing sports. I became concerned when we would go on vacation and he didn’t want to do things that he loved like fishing and hiking.”

Dr. Kauffman says, “Ask questions and listen thoughtfully. If your teen reports thoughts and feelings of depression, let them know that you love them unconditionally, you are there for them, and you will get through it together.” 

But whether your child opens up to you or not, consult with their health care provider if you notice any of the above signs and are concerned. You can also consult with their counselor at school to secure support services, as well as referrals to trusted therapists, suggests Dr. Kauffman.
