Deepfake Nude Images in Schools Are a More Widespread Problem Than You Realize

Nonetheless, distinct patterns emerge. In almost every case, teenage boys are reported to be behind the creation of the explicit images or videos, which are frequently shared with peers through social media platforms or instant messaging, causing significant harm to the victims. “I’m afraid that each time they see me, they remember those pictures,” one victim from Iowa remarked this year. “She’s been in tears. She hasn’t been eating,” stated the family of another.
In many cases, victims say they are reluctant to attend school or to face those who have shared explicit content of them. “She feels utterly hopeless, knowing these images will likely end up online and reach predators,” says lawyer Shane Vogt, who, along with three Yale Law School students, Catharine Strong, Tony Sjodin, and Suzanne Castillo, is representing an unnamed New Jersey teenager in a lawsuit against a nudifying service. “She is profoundly distressed by the realization that these images exist, and she will have to continuously monitor the internet for the rest of her life to prevent their spread.”
In countries such as South Korea and Australia, schools have offered students the option to exclude their photos from yearbooks or have stopped posting student images on official social media, citing the risk of misuse for deepfakes. “Globally, there have been incidents where school photos were harvested from public social media profiles, altered using AI, and transformed into harmful deepfakes,” noted one Australian school. “We will instead showcase side profiles, silhouettes, the backs of heads, distant group photos, creative filters, or authorized stock photography.”
Sexual deepfakes powered by AI have been around since late 2017; however, the rise of generative AI technologies has fostered a murky world of “nudification” or “undress” tools. Numerous apps, bots, and websites now allow anyone to produce sexualized images and videos of others with just a few clicks, often without requiring any technical expertise.
“AI changes the game regarding scale, speed, and accessibility,” remarks Siddharth Pillai, co-founder and director of the RATI Foundation, a Mumbai-based organization committed to preventing violence against women and children. “The technical barriers have significantly lowered, allowing more individuals, including adolescents, to create more convincing outputs with minimal effort. As with many AI-driven threats, this leads to an oversupply of content.”
Amanda Goharian, research and insights director at the child safety organization Thorn, says their research points to varying motivations behind teenagers’ involvement in deepfake abuse, including sexual urges, curiosity, revenge, and peer challenges. Research on adult perpetrators of deepfake sexual abuse similarly reveals a range of motivations for creating such imagery. “The aim is not always sexual gratification,” Pillai states. “Increasingly, the intent is to humiliate, demean, and exert social control.”
“This issue transcends technology,” asserts Tanya Horeck, a professor of feminist media studies at Anglia Ruskin University who researches sexualized deepfakes in UK schools. “It’s rooted in the long-standing gender dynamics that enable these offenses.”
