The Risks of Deepfake “Nudify” Technology Are Growing More Alarming

Visit the website of a well-known deepfake generator, and you’ll encounter a menu of disturbing options. With a few clicks, you can transform a single image into an eight-second explicit video, placing women into realistic and graphic sexual scenarios. “Utilize our cutting-edge AI technology to convert any photo into a nude version,” claims text on the site.
The potential for misuse is vast. Among the 65 video “templates” available on the site, there are various “undressing” videos depicting women removing clothing—as well as explicitly titled videos like “fuck machine deepthroat” and several “semen” clips. Each video is generated for a small fee, with additional costs for AI-generated audio.
The site, which WIRED has chosen not to disclose to mitigate further exposure, includes warnings advising users to only upload photos they have permission to alter with AI. It remains unclear whether any measures are in place to enforce this guideline.
Grok, the chatbot developed by Elon Musk’s xAI, has been used to create thousands of nonconsensual “undressing” or “nudify” bikini images, further institutionalizing and normalizing digital sexual harassment. It is, however, merely the most prominent example, not the most explicit. Over the past several years, a deepfake ecosystem has emerged, encompassing numerous websites, bots, and apps that automate image-based sexual abuse, including the production of child sexual abuse material (CSAM). This “nudify” ecosystem, along with its detrimental effects on women and girls, is likely more advanced than many realize.
“It’s no longer just a crude synthetic strip,” explains Henry Ajder, a deepfake expert who has monitored the technology for over five years. “We’re discussing a significantly higher level of realism in the generated content, as well as a much wider array of functionalities.” Collectively, these services are probably generating millions of dollars annually. “It’s a societal plague; it’s among the darkest aspects of the ongoing AI and synthetic media revolution,” he remarks.
In the past year, WIRED has observed how several explicit deepfake services have rolled out new capabilities and swiftly expanded their harmful video generation offerings. Image-to-video models now typically require just one photo to create a short clip. A WIRED review of over 50 “deepfake” websites, likely attracting millions of views monthly, reveals that nearly all now provide explicit, high-quality video generation and often include dozens of sexual scenarios for women.
Additionally, on Telegram, numerous sexual deepfake channels and bots have frequently introduced new features and software updates, including various sexual poses and positions. For example, in June last year, one deepfake service promoted a “sex-mode,” featuring a message: “Experiment with different outfits, your preferred poses, ages, and more settings.” Another announced that “more styles” of images and videos would be released soon, allowing users to “create exactly what you envision with your own descriptions” through custom prompts to AI systems.
“It’s not merely about ‘You want to undress someone.’ It’s about ‘Here are all these alternate fantasy versions of it.’ It includes various poses and different sexual positions,” notes independent analyst Santiago Lakatos, who, along with the media outlet Indicator, has researched how “nudify” services typically leverage major tech company infrastructure and likely generate substantial revenue in the process. “There are versions where you can make someone appear pregnant,” Lakatos adds.
A WIRED investigation uncovered over 1.4 million accounts linked to 39 deepfake creation bots and channels on Telegram. Following WIRED’s inquiry about these services, Telegram removed at least 32 of the deepfake tools. “Nonconsensual pornography—including deepfakes and the tools used to produce them—is explicitly forbidden under Telegram’s terms of service,” a spokesperson from Telegram stated, noting that it removes content upon detection and eliminated 44 million pieces of policy-violating content last year.
