Deepfake Sex Crimes Surge: When Will Lawmakers and Schools Fight Back?
Generative AI Isn’t the Villain—Predators and Tech Bro Apathy Are

In December 2024, two teenage boys were charged with creating and sharing 347 deepfake nude images and videos of 60 girls, 48 of them students at a private school in Pennsylvania. According to Rawstory, 59 of the victims were under the age of 18. The images were posted to a Discord server. One of the boys was a former student.
In the US, explicit AI-generated images of minors are categorized as CSAM (child sexual abuse material). The teen boys who got cheap thrills from exploiting their classmates probably had little idea of just how much trouble they’d be in.
“The number of victims involved in this case is troubling, and the trauma that they have endured in learning that their privacy has been violated in this manner is unimaginable,” District Attorney Heather Adams told ABC27.com.
Susquehanna Regional Police filed five dozen counts for various sex crimes, including possession of child pornography and sexual abuse of children, according to the same article.
ABC27.com previously reported that the new Pennsylvania state law, Act 125, would not apply to this case because the crimes occurred in May and November 2023. Act 125 was signed into law on October 29, 2024, and amended existing laws to include any “artificially generated sexual depiction of an individual.”
The dark side of generative AI
What are deepfakes, exactly? According to NewsNation, deepfakes are “video, photo or audio recordings that appear to be real but have been manipulated with artificial intelligence. A deepfake can depict someone appearing to say or do something that they, in fact, never said or did.”
While it might be amusing to paste the head of a long-dead person, such as Albert Einstein, onto the body of a bunny, it is against the law to slap a classmate’s selfie onto a nude or sexually explicit photo of someone else.
Last September, the nonprofit Center for Democracy and Technology (CDT) found that 15% of high school students had heard of a fellow student affected by deepfakes. Alexandra Reeve Givens, chief executive of CDT, told Rawstory, “The rise of generative AI has collided with a long-standing problem in schools: the act of sharing non-consensual intimate imagery.” CDT recommends that schools update their sexual harassment policies.
Sharing images without consent, whether they are AI-generated or not, can expose the person depicted to further cyberbullying, threats, harassment, and severe damage to their mental health. Such images also have the potential to damage college, employment, and dating prospects.
RECOMMENDED READ: Reality Under Siege: Challenges in Detecting and Combating Deepfakes
Girls don’t feel safe at school
Returning to the Pennsylvania case, WGAL8, a local television station, reported that parents of 46 of the students have filed a civil suit against administrators of the Lancaster Country Day School. The suit claims these mandated reporters did not follow up on a complaint made nearly a year earlier, allowing the situation to worsen.
WGAL8 also noted hundreds of students at the school and some instructors walked out protesting the administration’s mishandling of the case: “One student said she thought the action sent strong messages to the school: that students are upset with how the adults handled the AI situation and that girls don’t feel safe at school.”
Though female students are most at risk for deepfake exploitation, it can affect students of other genders too.
Will the DOJ pursue those responsible?
In November 2024, the Associated Press interviewed Steven Grocki, chief of the US Justice Department’s Child Exploitation and Obscenity Section (CEOS), about AI-generated child sexual abuse images: “We’ve got to signal early and often that it is a crime, that it will be investigated and prosecuted when the evidence supports it.” He added, “These laws exist. They will be used. We have the will. We have the resources.”
But that was last year. Given the current US administration, will Grocki and the CEOS be able to function at the same level as they did in 2024? Is the DOJ’s mission to combat AI-generated CSAM imperiled even as the problem seems to be escalating nationally?
Federal will and resources may be in jeopardy, but at least twenty states have established laws against AI-generated deepfake images of minors, including Utah, Idaho, Georgia, Oklahoma, Tennessee, South Dakota, California, and now, Pennsylvania.
Blame the developers and industry entitlement
Artificial intelligence is just a tool, and for now a somewhat controllable one, though how much longer that remains true is a matter of debate among those watching its development, and its developers, with growing alarm.
But AI’s potential as an existential threat, while important to consider and prevent, is of less concern to the parents of a K-12 child who has been exploited through deepfake images, whether generated by a school peer or an older predator.
Parents and their children are as much victims of the present culture of entitlement and greed as they are of the images themselves. It’s clear the AI horse bolted from the barn long before developers gave the safety and impact of their products a thought. And so hundreds, if not thousands, of young lives may have been negatively affected.
The bullies and predators are also to blame. Sadly, they too might be products of a culture of entitlement, one that says it’s okay to turn another person’s body, reputation, and life into fodder for amusement, without consent and without concern for the consequences.
This is the antithesis of a sex-positive, technologically progressive future created by responsible adults concerned with the safety of all, including those under the age of consent.
Image source: A.R. Marsh with Ideogram.ai