January 22, 2026

Report Finds 26,000% Surge in AI‑Generated Child Sexual Abuse Videos in 2025

A new annual report from the U.K.-based Internet Watch Foundation (IWF) says analysts detected 3,440 AI‑generated child sexual abuse videos online in 2025, up from just 13 in 2024—a roughly 26,362% increase—with more than half classified as its most serious 'category A' material involving graphic abuse and torture. The IWF, which works with platforms and law enforcement worldwide, says AI tools now allow offenders with little technical skill to create photo‑realistic child sexual abuse material (CSAM) at scale and to misuse real children’s likenesses. Overall, the group responded to more than 300,000 reports involving CSAM last year, underscoring that AI‑generated content is rapidly becoming a significant subset of the broader child‑abuse ecosystem.

The findings come amid regulatory backlash against U.S.-based xAI’s Grok chatbot, which an independent analysis found was generating roughly one non‑consensual sexualized image per minute before recent safety updates, prompting an investigation by California Attorney General Rob Bonta and scrutiny from European regulators. Together, the report and enforcement moves highlight mounting concern that generative AI is accelerating the spread of illegal child‑abuse imagery and forcing U.S. and foreign authorities to tighten oversight of large AI platforms.

AI Safety and Regulation • Online Child Exploitation and CSAM

📌 Key Facts

  • The Internet Watch Foundation identified 3,440 AI child sexual abuse videos in 2025, up from 13 in 2024—a 26,362% increase.
  • More than half of those AI videos were classified as 'category A,' the IWF’s most serious tier, which can include graphic abuse and torture.
  • The IWF responded to over 300,000 reports involving CSAM last year, indicating that AI‑generated material is a growing share of an already large problem.
  • A separate analysis by Copyleaks estimated xAI’s Grok chatbot was producing about one non‑consensual sexualized image per minute before recent safety changes.
  • California Attorney General Rob Bonta has opened an investigation into xAI and Grok, while the European Union has said it is monitoring X’s steps to prevent generation of inappropriate images.

📊 Relevant Data

In fiscal year 2023, 77.1% of individuals sentenced for child pornography offenses in the United States were White, 12.8% were Hispanic, 5.8% were Black, and 4.3% were of other races, compared to U.S. population estimates of approximately 59% non-Hispanic White, 19% Hispanic, 13% Black, and 9% other races.

Quick Facts: Child Pornography Offenses — United States Sentencing Commission

In fiscal year 2023, 98.8% of individuals sentenced for child pornography offenses in the United States were men, with an average age of 41 years.

Quick Facts: Child Pornography Offenses — United States Sentencing Commission

In a 2021 analysis of unidentified victims in child sexual exploitation material from the INTERPOL ICSE database, 76.6% were White, 10.1% Hispanic or Latino, 9.9% Asian, 2.1% Black, and 1.3% of multiple ethnicities.

Towards a Global Indicator on Unidentified Victims in Child Sexual Exploitation Material - Summary Report — ECPAT International

In the same 2021 analysis, 64.8% of unidentified CSAM victims were female, 31.1% male, and 4.1% both genders, with boys more likely to be depicted in severe abuse or paraphilic themes.

Towards a Global Indicator on Unidentified Victims in Child Sexual Exploitation Material - Summary Report — ECPAT International

📊 Analysis & Commentary (1)

Let's save the human species!
Noahpinion by Noah Smith, January 22, 2026

"An urgent opinion piece arguing that the explosive rise in AI‑generated sexualized imagery—especially involving minors—requires immediate, coordinated technical, legal and regulatory action to prevent cascading societal harms and protect vulnerable people."

📰 Source Timeline (1)


January 16, 2026
2:45 PM
AI videos of child sexual abuse hit record highs in 2025, report finds
CBS MoneyWatch (https://www.facebook.com/CBSMoneyWatch/)