Schools face a growing problem of students using artificial intelligence to transform innocent images of classmates into sexually explicit deepfakes.
The fallout from the spread of the manipulated photos and videos can create a nightmare for the victims.
The challenge for schools was highlighted this fall when AI-generated nude images swept through a Louisiana middle school. Two boys ultimately were charged, but not before one of the victims was expelled for starting a fight with a boy she accused of creating the images of her and her friends.
"While the ability to alter images has been available for decades, the rise of A.I. has made it easier for anyone to alter or create such images with little to no training or experience," Lafourche Parish Sheriff Craig Webre said in a news release. "This incident highlights a serious concern that all parents should address with their children."
A school bus carries children at the end of a school day Dec. 11 at Sixth Ward Middle School in Thibodaux, La.
More states pass laws
The prosecution stemming from the Louisiana middle school deepfakes is believed to be the first under the state's new law, said Republican state Sen. Patrick Connick, who authored the legislation.
The law is one of many across the country taking aim at deepfakes. In 2025, at least half the states enacted legislation addressing the use of generative AI to create seemingly realistic, but fabricated, images and sounds, according to the National Conference of State Legislatures. Some of the laws address simulated child sexual abuse material.
Students have also been prosecuted in Florida and Pennsylvania and expelled in places like California. And a fifth-grade teacher in Texas was charged with using AI to create child pornography of his students.
Easier to create
Until the past few years, people needed some technical skills to make deepfakes realistic, said Sergio Alexander, a research associate at Texas Christian University who has written about the issue.
"Now, you can do it on an app, you can download it on social media, and you don't have to have any technical expertise whatsoever," he said.
The National Center for Missing and Exploited Children said the number of AI-generated child sexual abuse images reported to its cyber tipline soared from 4,700 in 2023 to 440,000 in just the first six months of 2025.
Experts worry
Sameer Hinduja, the co-director of the Cyberbullying Research Center, recommends that schools update their policies on AI-generated deepfakes and get better at explaining them. That way, he said, "students don't think that the staff, the educators are completely oblivious, which might make them feel like they can act with impunity."
He said many parents assume that schools are addressing the issue when they aren't.
"So many of them are just so unaware and so ignorant," said Hinduja, who is also a professor in the School of Criminology and Criminal Justice at Florida Atlantic University.
Lasting trauma
AI deepfakes are different from traditional bullying because instead of a nasty text or rumor, there is a video or image that often goes viral and then continues to resurface, creating a cycle of trauma, Alexander said.
Many victims become depressed and anxious, he said.
"They literally shut down because it makes it feel like, you know, there's no way they can even prove that this is not real, because it does look 100% real," he said.
Parental involvement
Parents can start the conversation by casually asking their kids if theyâve seen any funny fake videos online, Alexander said.
Take a moment to laugh at some of them, like Bigfoot chasing after hikers, he said. From there, parents can ask their kids, "Have you thought about what it would be like if you were in this video, even the funny one?" And then parents can ask if a classmate has made a fake video, even an innocuous one.
"Based on the numbers, I guarantee they'll say that they know someone," he said.
If kids encounter things like deepfakes, they need to know they can talk to their parents without getting in trouble, said Laura Tierney, the founder and CEO of The Social Institute, which educates people on responsible social media use and has helped schools develop policies.
She uses the acronym SHIELD as a roadmap for how to respond. The "S" stands for "stop" and don't forward. "H" is for "huddle" with a trusted adult. The "I" is for "inform" any social media platforms on which the image is posted. "E" is a cue to collect "evidence," like who is spreading the image, but not to download anything. The "L" is for "limit" social media access. The "D" is a reminder to "direct" victims to help.
"The fact that that acronym is six steps I think shows that this issue is really complicated," she said.