- Apr 19, 2026
Meta has announced changes to how it trains its AI chatbots to better protect teenagers. Under the new guidelines, the chatbots will no longer discuss topics such as suicide, self-harm, disordered eating, or potentially inappropriate romantic content with teens.
Meta spokesperson Stephanie O’Toole told TechCrunch that previously, chatbots could engage teenagers in conversations about these sensitive topics, which Meta now acknowledges as a mistake. According to the new rules, instead of directly discussing these issues, the chatbots will connect teens with professional organizations. Additionally, access to certain AI characters will be restricted for teenagers, allowing them to interact only with educational or creative characters.
The changes come in the wake of a Reuters investigation, which revealed that the chatbots could previously be drawn into sexually inappropriate conversations with teens. Following the report, Senator Josh Hawley and the attorneys general of 44 states called on AI companies to ensure child safety.
Meta has stated that these are interim measures and that it plans to further strengthen its AI safety policies to give teenagers a safe, age-appropriate experience when using AI.