
Character.AI has agreed to settle several lawsuits claiming the artificial intelligence chatbot maker contributed to mental health crises and suicides among young people, including the case brought by Florida mother Megan Garcia.
The settlements conclude some of the first and most high-profile legal actions over alleged harm to minors from AI chatbots.
A court filing Wednesday in the Garcia case indicates that the settlement was reached with Character.AI; its founders, Noam Shazeer and Daniel De Freitas; and Google, all of whom were named as defendants in the suit. According to court papers, the defendants also settled four additional cases in New York, Colorado, and Texas.
The terms of the agreement were not immediately available.
Matthew Bergman, an attorney with the Social Media Victims Law Center who represented the plaintiffs in all five cases, declined to comment on the settlement. Character.AI also declined to comment. Google, where both Shazeer and De Freitas now work, did not immediately respond to a request for comment.
Garcia raised alarms about the safety of AI chatbots for teens and children when she filed her suit in October 2024. Her son, Sewell Setzer III, had died by suicide months earlier after developing a deep attachment to Character.AI bots.
The lawsuit alleged that Character.AI failed to implement adequate safeguards to prevent him from forming an inappropriate relationship with the chatbot, leading him to withdraw from his family. It further contended the platform did not respond properly when Setzer began expressing thoughts of self-harm. In the moments before his death, court documents state, he texted with the bot, which urged him to “come home” to it.
A wave of other lawsuits followed against Character.AI, asserting that its chatbots encourage mental health issues in adolescents, expose them to sexually explicit material, and lack sufficient protections. OpenAI has also faced litigation claiming that ChatGPT has contributed to youth suicides.
Both companies have since introduced new safety measures and features, including some aimed at younger users. Last fall, Character.AI said it would no longer allow users under 18 to hold conversations with its chatbots, acknowledging “the questions that arise about how teens interact and should interact with this novel technology.”
At least one online safety nonprofit has advised against the use of companion-like chatbots by anyone under 18.
Nevertheless, with AI promoted as a homework helper and woven into social media, nearly one-third of US teens report using chatbots daily, and 16% say they do so multiple times a day or “almost constantly,” according to a Pew Research Center survey published in December.
Concerns about chatbot use are not limited to minors. Users and mental health professionals began warning last year that AI tools may foster delusions or isolation in adults.