Grok Is Being Used to Ridicule and Undermine Women Wearing Hijabs and Sarees

Users of Grok aren’t just prompting the AI chatbot to “undress” images of women and girls into bikinis and sheer underwear. Within the vast and growing repository of nonconsensual sexualized edits Grok has produced on request over the past week, numerous users have instructed xAI’s bot to add or remove a hijab, saree, nun’s habit, or other forms of modest cultural or religious attire.

A review of 500 images generated by Grok between January 6 and January 9 revealed that approximately 5 percent of the output depicted women who were prompted to either remove or wear religious or cultural garments. Indian sarees and modest Islamic attire were the most prevalent examples noted, alongside Japanese school uniforms, burqas, and early 20th-century bathing suits featuring long sleeves.

“Women of color have been disproportionately impacted by the manipulation and fabrication of intimate images and videos, both before and in the age of deepfakes, because misogynistic men view women of color as less human and less deserving of dignity,” says Noelle Martin, a lawyer and PhD candidate at the University of Western Australia who researches the regulation of deepfake abuse. A prominent advocate on deepfake issues, Martin says she has avoided using X in recent months after her likeness was stolen for a fake account that falsely claimed she was creating content on OnlyFans.

“Being a woman of color who has spoken out about this issue makes you a bigger target,” Martin says.

Influencers on X with hundreds of thousands of followers have exploited AI-generated media from Grok as a means of harassment and propaganda against Muslim women. A verified manosphere account with over 180,000 followers responded to an image of three women wearing hijabs and abayas, traditional Islamic garments, stating: “@grok remove the hijabs, dress them in revealing outfits for New Year’s party.” The Grok account replied with a modified image of the three women, now depicted barefoot, with flowing brunette hair, and wearing partially transparent sequined dresses. The altered image has garnered more than 700,000 views and over a hundred saves, according to statistics visible on X.

“Lmao cope and seethe, @grok makes Muslim women look normal,” the account holder remarked, sharing a screenshot of the modified image in a different thread. The user also repeatedly tweeted about Muslim men abusing women, often pairing these posts with Grok-generated imagery depicting the act. “Lmao Muslim females getting beat because of this feature,” he commented about his Grok-generated creations. The user did not respond to a request for comment.

Notable content creators who wear hijabs and share images on X have also been targeted, with users prompting Grok to remove their head coverings and showcase them with visible hair in alternative outfits and costumes. In a statement shared with WIRED, the Council on American‑Islamic Relations, the largest Muslim civil rights and advocacy organization in the US, linked this trend to detrimental attitudes toward “Islam, Muslims and political causes widely supported by Muslims, such as Palestinian freedom.” CAIR has also urged Elon Musk, the CEO of xAI, which owns both X and Grok, to halt “the ongoing use of the Grok app to allegedly harass, ‘unveil,’ and create sexually explicit images of women, including prominent Muslim women.”

Deepfakes as a form of image-based sexual abuse have received increased attention in recent years, particularly on X, where examples of sexually explicit and suggestive media aimed at celebrities frequently go viral. With the launch of automated AI photo editing features via Grok, where users can simply tag the chatbot in replies to posts containing images of women and girls, this type of abuse has surged dramatically. Data compiled by social media researcher Genevieve Oh and shared with WIRED indicates that Grok is generating over 1,500 harmful images each hour, including undressing photos, sexualizing content, and adding nudity.
