OpenAI’s Sora 2 video generator is under intense scrutiny as users exploit the tool to create disturbing, fetishistic content featuring photorealistic AI-generated children. Despite initial safety promises, a surge of “edgelord” parodies and suggestive advertisements has bypassed platform guardrails, flooding social media feeds and triggering urgent warnings from child safety advocates globally.
The Rise of Suggestive AI Parodies on TikTok
Digital creators are increasingly using Sora 2 to produce “fake commercials” that blur the line between satire and predatory content. One widely circulated clip features a photorealistic young girl promoting a “Vibro Rose” toy—a device that mimics the appearance and branding of popular adult products. This trend extends to other unsettling concepts, including toys that emit “sticky milk” or “goo,” and parodies referencing convicted criminals such as Harvey Weinstein and Jeffrey Epstein.
While some creators claim these videos are dark humor or social commentary, safety experts argue they serve a more sinister purpose. By using leading captions and specific keywords, these accounts often attract predatory audiences. Investigations by WIRED revealed that these videos frequently appear alongside comments steering viewers toward encrypted messaging apps such as Telegram, a known hub for illicit networks.
Alarming Data: AI-Generated Abuse Material Doubles
The technical ease of generating lifelike minors has created a massive enforcement challenge for regulators. New data from the Internet Watch Foundation (IWF) indicates that reports of AI-generated child sexual abuse material (CSAM) doubled between 2024 and 2025. According to the IWF, 94% of these illegal AI images depict girls, and more than half of the content falls into the most severe legal categories, which cover depictions of sexual activity.
Legislative Responses to Digital Exploitation
In response to this influx, the United Kingdom is amending its Crime and Policing Bill to allow “authorized testers” to verify that AI models cannot generate CSAM. Similarly, in the United States, 45 states have already enacted laws to criminalize AI-generated child abuse material. These legal frameworks aim to hold both creators and developers accountable as generative technology outpaces existing safety protocols.
The Failure of Contextual Moderation
OpenAI and Google (developer of the Veo video model) maintain strict policies against the sexualization of minors. OpenAI spokesperson Niko Felix stated that the company proactively bans accounts violating these terms and designs systems to refuse harmful requests. However, experts like Mike Stabile of the Free Speech Coalition argue that automated filters often fail to grasp “contextual nuance.”
“Anytime you’re dealing with kink or fetish, there will be things that people who are not familiar are going to miss,” Stabile noted. He emphasized that AI companies must diversify their moderation teams to recognize subtle fetish cues that automated systems currently overlook. While TikTok has removed dozens of flagged videos, many “Incredible Gassy” and “inflation fetish” clips—which often feature AI-generated minors—remain accessible, highlighting the persistent gap between corporate policy and platform reality.
Demand for “Safe by Design” Architecture
Child safety organizations are now calling for a fundamental shift in how AI video tools are built. Rather than relying on reactive moderation, the IWF advocates “safe by design” principles, under which safeguards are integrated into the model’s core architecture to block the generation of harmful imagery at the prompt level. As Sora 2 moves toward a wider release, the pressure on OpenAI to close these loopholes continues to mount.
