Grok is Promoting AI’s Mainstream “Unveiling”

Elon Musk has not intervened to stop Grok, the chatbot from his AI company xAI, from generating sexualized images of women. In the week since reports that X’s image generation tool was being misused to create sexualized images of children, Grok has potentially produced thousands of nonconsensual “undressed” and “bikini” images of women.

According to a WIRED review of Grok’s publicly shared live output, the chatbot continues to produce images of women in bikinis or underwear in response to user prompts on X every few seconds. In one five-minute window on Tuesday, Grok published at least 90 images of women in swimsuits and various states of undress.

Although the images lack nudity, they involve the Musk-owned chatbot “stripping” clothes from photos uploaded by other users on X. Users often attempt to bypass Grok’s safety features by requesting edits to make women appear in a “string bikini” or a “transparent bikini,” though these efforts are not always successful.

While AI-generated image technology has been utilized to digitally harass and abuse women for years—typically referred to as deepfakes and enabled by “nudify” software—the continued use of Grok to generate massive quantities of nonconsensual imagery stands out as one of the most mainstream instances of abuse so far. Unlike specialized nudify or “undress” software, Grok does not require payment for image generation, delivers results in seconds, and is accessible to millions of X users, all of which may contribute to normalizing the production of nonconsensual intimate imagery.

“When a company offers generative AI tools on their platform, it is their duty to mitigate the risk of image-based abuse,” states Sloan Thompson, director of training and education at EndTAB, an organization dedicated to combating tech-facilitated abuse. “What’s concerning here is that X has done the opposite. They’ve embedded AI-driven image abuse directly into a mainstream platform, facilitating sexual violence and making it more scalable.”

Grok’s creation of sexualized imagery began gaining traction on X at the end of last year, although its capability to produce such images has been known for months. Recently, users have targeted photos of social media influencers, celebrities, and politicians by replying to posts from other accounts and asking Grok to modify the shared images.

Women who have shared their own photos have seen replies from accounts successfully asking Grok to convert the images into “bikini” versions. In one case, multiple X users asked Grok to alter a photo of the deputy prime minister of Sweden to depict her in a bikini. Reports indicate that two UK government ministers have also been “stripped” to bikini images.

Images on X illustrate formerly clothed photos of women—such as one person in an elevator and another in the gym—being transformed into images featuring minimal clothing. A typical request reads, “@grok put her in a transparent bikini.” In another series of requests, a user prompted Grok to “inflate her chest by 90%,” then “inflate her thighs by 50%,” and finally to “change her clothes to a tiny bikini.”

An analyst who has monitored explicit deepfakes for years, and who requested anonymity for privacy reasons, suggests that Grok may have become one of the largest platforms distributing harmful deepfake images. “It’s completely mainstream,” the analyst says. “It’s not just a shadowy group [creating images]; it’s literally everyone, from all walks of life, posting from their main accounts. There’s zero concern.”
