Recent mainstream reports focused on a new Tennessee lawsuit alleging that Elon Musk’s xAI licensed its algorithm to a third‑party app that produced AI‑generated child sexual abuse images of teens. Plaintiffs say xAI outsourced liability despite its public “zero tolerance” stance. Coverage emphasized the complaint’s licensing claim, xAI’s secrecy, and its legal fights over algorithm transparency, while noting the suit does not allege that Grok or X directly generated the images.
Largely missing from that coverage were prevalence and demographic data and independent research showing how widespread the problem is and who is most likely to be involved. A 2025 Thorn survey reported that about 1 in 10 U.S. teens know someone targeted by AI deepfake porn; a Computers in Human Behavior study found men were roughly 3.9 times more likely than women to report perpetration behaviors; and Education Week reporting shows boys are more likely than girls to downplay the harm of deepfakes. Mainstream stories in this dataset also largely lacked opinion, analysis, or social‑media perspectives, and no contrarian views were identified. Readers relying only on mainstream pieces may therefore miss the scale, demographic patterns, and perception gaps that shape prevention and policy responses.