Users Are Creating Alarming Videos Featuring AI-Generated Children with Sora 2

On October 7, a TikTok user known as @fujitiva48 posed a question in their latest video: “What do you think of this new toy for little kids?” More than 2,000 viewers encountered what appeared to be a parody of a TV commercial, and the reactions were swift. “Hey, this isn’t funny,” one commenter wrote. “The person behind this should be looked into.”

It’s clear why the video drew such an intense response. The mock advertisement opens with a lifelike young girl showing off a toy: pink, glittering, with a bumblebee on the handle. An adult male voiceover identifies it as a pen while the girl and two others doodle on paper. But the object’s floral design, buzzing function, and name, the Vibro Rose, strongly resemble those of a sex toy. An “add yours” button prompting viewers to share with the phrase “I’m using my rose toy” eliminates any remaining doubt. (WIRED reached out to the @fujitiva48 account for comment but didn’t receive a reply.)

The video was made with Sora 2, the latest video generation tool from OpenAI, which became available to select users in the US on September 30. Within a week, clips like the Vibro Rose ad had made their way from Sora to TikTok’s For You page. Other mock ads were even more explicit: WIRED found multiple accounts sharing similar Sora 2-generated videos of rose- or mushroom-shaped water toys and cake decorators that dispensed “sticky milk,” “white foam,” or “goo” onto realistic depictions of children.

In many jurisdictions, such material would warrant investigation if it depicted real children rather than digital representations. But laws around AI-generated fetish content involving minors remain ambiguous. Recent 2025 data from the UK’s Internet Watch Foundation shows a troubling rise in reports of AI-generated child sexual abuse material (CSAM), from 199 cases between January and October 2024 to 426 over the same period in 2025. Notably, 56 percent of this content falls within Category A, the UK’s most severe classification, which covers penetrative sexual acts, sexual acts with animals, or sadism. The IWF also reported that 94 percent of the illegal AI-generated images it tracked involved girls. (Sora does not appear to produce any Category A content.)

“Frequently, we observe real children’s appearances being exploited to create nude or sexual imagery, predominantly involving girls. This continues to be a concerning trend targeting girls online,” Kerry Smith, CEO of the IWF, told WIRED.

The surge of harmful AI-generated content has prompted the UK to propose a new amendment to its Crime and Policing Bill that would allow “authorized testers” to verify that artificial intelligence tools do not produce CSAM. As the BBC reports, the amendment aims to ensure that models have safeguards against generating certain images, particularly extreme pornography and nonconsensual intimate material. In the US, 45 states have enacted laws criminalizing AI-generated CSAM, most within the past two years as AI generators have continued to advance.
