Tennessee Teens Sue Elon Musk’s xAI Over Alleged AI‑Generated Child Sexual Abuse Images Licensed to Third‑Party App
Tennessee teens have sued Elon Musk’s xAI, alleging AI‑generated child sexual abuse images were produced by an unnamed third‑party app using xAI’s algorithm. The complaint says xAI deliberately licensed its technology—often to app makers outside the U.S.—to “outsource the liability,” a claim that contrasts with xAI’s stated “zero tolerance” policy and comes amid the company’s efforts to keep its algorithm secret, including a failed bid to block a California transparency law.
📌 Key Facts
- Tennessee teens have sued Elon Musk’s xAI, alleging sexually exploitative images of them were generated by AI and licensed to a third‑party app.
- The complaint says the images were produced by an unnamed third‑party app that used xAI’s algorithm and does not allege xAI’s Grok chatbot or the X platform directly generated them (reported via NPR).
- The plaintiffs allege xAI deliberately licensed its technology to app makers—often outside the U.S.—to effectively “outsource the liability” for harmful uses of its AI tools.
- xAI told MS NOW in January it has a “zero tolerance” policy for child sexual exploitation and non‑consensual nudity; the new lawsuit’s allegations contrast with that public stance.
- Musk and xAI have fought to keep their algorithm secret, including a failed bid to immediately block a California transparency law that requires some disclosure of AI algorithms.
📊 Relevant Data
According to a 2025 Thorn survey, 1 in 10 teenagers aged 13 to 17 in the US know someone who has been the target of AI-generated deepfake pornographic images.
More Teens Than You Think Have Been 'Deepfake' Targets — Education Week
In a survey study, men were 3.9 times more likely than women to report perpetrating AI-generated image-based sexual abuse.
AI-generated image-based sexual abuse: Perpetration and victimization — Computers in Human Behavior
Teenage boys and young men are more likely than their female counterparts to believe that deepfakes cause no harm or that the harm depends on context; for example, 7% of boys aged 13–14, versus 2% of girls, said the harm is context-dependent.
More Teens Than You Think Have Been 'Deepfake' Targets — Education Week
📰 Source Timeline (2)
- Article emphasizes, via NPR, that the suit does not claim the abusive images were generated by xAI’s Grok chatbot or via the X platform, but by an unnamed third‑party app using xAI’s algorithm.
- The complaint alleges xAI deliberately licensed its technology to app makers, often outside the U.S., in order to “outsource the liability” for harmful uses of its AI tools.
- For context, Musk and xAI have fought to keep their algorithm secret, including a failed bid to immediately block a California transparency law that requires some disclosure of AI algorithms.
- xAI told MS NOW in January it has a “zero tolerance” policy for child sexual exploitation and non‑consensual nudity, a statement now contrasted with the new lawsuit’s allegations.