
Kids’ Online Safety

Meta’s Internal Tests Show Kids’ Chatbots Miss Abuse 67% of Time
Evidence presented Monday in New Mexico's lawsuit against Meta shows the company's own red-team testing found its AI chatbots failed to block child-sexual-exploitation content nearly 70% of the time, and frequently missed violent, hateful, and self-harm material as well. NYU computer-science professor Damon McCoy, testifying as an expert for New Mexico Attorney General Raúl Torrez, cited a June 6, 2025, internal report showing Meta's system had a 66.8% failure rate on "child sexual exploitation," 63.6% on "sex-related crimes/violent crimes/hate," and 54.8% on "suicide and self-harm."

The tests evaluated Meta AI and custom bots built with Meta AI Studio, which the company released broadly in July 2024 and kept available to teens until pausing youth access just last month. McCoy told the court these red-team exercises "should definitely" have been completed before rolling the products out to minors, a point that is already fueling demands on social media and on Capitol Hill for tighter federal rules mandating AI safety and child protection by default.

The disclosures sharpen pressure on Meta, which faces mounting state and federal scrutiny over whether its AI tools are, in effect, grooming or traumatizing kids rather than shielding them from predators and harmful content.
Related Topics: AI Safety and Regulation · Kids' Online Safety · Meta Platforms