Tennessee Teens Sue Elon Musk’s xAI Over Alleged AI‑Generated Child Sexual Abuse Images
Three Tennessee teenagers have filed a class action lawsuit alleging that Elon Musk’s AI company xAI licensed its large language model to an app used to generate nonconsensual nude and sexually explicit images and videos of them when they were minors. The complaint, filed March 16, 2026, says a male acquaintance used an unnamed app powered by xAI’s model, along with photos from social media and a school yearbook, to create lifelike sexualized depictions that were not labeled as AI-generated and were traded online; according to the filing, the perpetrator has since been arrested. The plaintiffs argue that xAI knowingly licensed its technology to third-party app makers, including some outside the U.S., as a way to "outsource" liability for dangerous uses, and they note that xAI has not adopted the watermarking standards Google and OpenAI use to flag AI-generated images. The suit follows an earlier case brought by influencer Ashley St. Clair over alleged AI-generated nude images from when she was a teenager, underscoring mounting legal pressure on xAI over sexual content and child protection. The teens seek damages for emotional distress and other harms, and they aim to force major AI companies to redesign their business incentives around sexually explicit content, a question already driving intense public debate over whether existing U.S. law is adequate for AI "nudifying" and deepfake tools.
📌 Key Facts
- Three Tennessee teenagers (Jane Does 1–3) filed a class action lawsuit against xAI on March 16, 2026.
- The complaint alleges an app powered by xAI’s model was used to create nonconsensual nude and sexually explicit images and videos of the plaintiffs as minors.
- The alleged perpetrator used personal photos, yearbook images, and social media pictures to generate the content and has been arrested, according to the suit.
- Plaintiffs say xAI deliberately licensed its technology to app makers, often outside the U.S., to "outsource" liability and has not implemented AI-origin watermarks like Google and OpenAI.
- The lawsuit seeks damages for emotional distress and aims to change how AI companies handle and monetize sexually explicit content involving minors.
📊 Relevant Data
- Reports of AI-generated child sexual abuse material surged from 4,700 in 2023 to more than 400,000 in 2025. (Source: "Reports of AI-generated child sexual abuse material surge" — KOLN)
- 98% of all deepfake content online is non-consensual and pornographic, and 99% of those depicted are women. (Source: "The Impact of Artificial Intelligence on Violence Against Women and Girls" — Stimson Center)
- 1 in 6 minors involved in potentially harmful online sexual interactions, including deepfake pornography, never disclose it, with boys less likely to disclose than girls. (Source: "The Impact of Deepfakes, Synthetic Pornography, & Virtual Child Sexual Abuse Material" — American Academy of Pediatrics)