Topic: Online Child Exploitation and CSAM
đź“” Topics / Online Child Exploitation and CSAM


1 Story
1 Related Topic
Report Finds 26,000% Surge in AI‑Generated Child Sexual Abuse Videos in 2025
A new annual report from the U.K.-based Internet Watch Foundation (IWF) says analysts detected 3,440 AI‑generated child sexual abuse videos online in 2025, up from just 13 in 2024—a roughly 26,362% increase—with more than half classified as its most serious 'category A' material involving graphic abuse and torture. The IWF, which works with platforms and law enforcement worldwide, says AI tools now allow offenders with little technical skill to create photo‑realistic child sexual abuse material (CSAM) at scale and to misuse real children’s likenesses. Overall, the group responded to more than 300,000 reports involving CSAM last year, underscoring that AI‑generated content is rapidly becoming a significant subset of the broader child‑abuse ecosystem. The findings come amid regulatory backlash against U.S.-based xAI’s Grok chatbot, which an independent analysis found was generating roughly one non‑consensual sexualized image per minute before recent safety updates, prompting an investigation by California Attorney General Rob Bonta and scrutiny from European regulators. Together, the report and enforcement moves highlight mounting concern that generative AI is accelerating the spread of illegal child‑abuse imagery and forcing U.S. and foreign authorities to tighten oversight of large AI platforms.
AI Safety and Regulation · Online Child Exploitation and CSAM