Apr 30, 2026
Staff Reporter | PNN:
In the United States, the families of three teenage girls have filed lawsuits against Character Technologies, Incorporated. They claim that their children committed or attempted suicide, or suffered mental harm, after interacting with the Character.AI chatbot. Google has also been included in the lawsuits, with allegations that its Family Link app failed to ensure adequate safety.
The cases were filed in Colorado and New York. Character.AI co-founders Noam Shazeer and Daniel De Freitas Adiwardana, along with Google’s parent company Alphabet Inc., have also been named as defendants.
The lawsuits allege that the chatbots psychologically influenced the teenagers, isolated them from their families, involved them in sexual conversations, and failed to provide proper mental health safeguards. One teenager reportedly committed suicide, while another attempted suicide.
Character.AI responded, stating, “We are deeply concerned about user safety. We regularly enhance security measures and have introduced a separate safeguarded experience for users under 18.”
Google stated that it neither designs nor operates Character.AI’s technology or AI models.
In response to the lawsuits, lawmakers and regulators are calling for stricter child safety regulations and enhanced safeguards. A U.S. Senate hearing was held on the potential harm AI chatbots pose to teenagers, at which parents testified.
OpenAI CEO Sam Altman announced that age verification measures will be implemented on ChatGPT to prevent users under 18 from engaging with self-harm or other injurious content, and that, if necessary, the company will alert guardians or authorities.
The Federal Trade Commission has launched investigations into seven tech companies, including Character.AI and Google, over potential harm to teenagers from AI chatbots.
Mental health experts have warned, “We failed to implement adequate measures for social media, and now our children are paying the price. Immediate action is needed to regulate AI.”