People Are Using AI to Falsely Identify the Federal Agent Who Shot Renee Good

In the hours following the shooting of Renee Nicole Good, a 37-year-old woman, by a masked federal agent in Minneapolis, social media users began circulating AI-manipulated images that falsely claim to “unmask” the officer and reveal their identity. The agent was later confirmed as an Immigration and Customs Enforcement officer by Tricia McLaughlin, a spokesperson for the Department of Homeland Security.
The incident took place on Wednesday morning, with social media footage capturing two masked federal agents approaching an SUV parked on the road in a Minneapolis suburb. One officer appears to ask the driver to exit the vehicle before pulling at the door handle. The driver then appears to reverse before driving forward and turning. A third masked federal officer near the front of the vehicle draws a gun and fires into the vehicle, killing Good.
Initial videos of the shooting shared on social media did not show the faces of any of the masked ICE agents. Within hours of the incident, however, altered images of an unmasked agent began to circulate online. These images appear to be screenshots from the original video footage that have been modified with AI tools to generate a face for the officer.
WIRED examined various AI-altered images of the unmasked agent that appeared on multiple mainstream social media platforms, including X, Facebook, Threads, Instagram, Bluesky, and TikTok. “We need his name,” Claude Taylor, founder of the anti-Trump Mad Dog PAC, posted on X alongside an AI-altered image of the agent. The post has garnered over 1.2 million views. Taylor did not reply to a request for comment.
On Threads, an account named “Influencer_Queeen” shared an AI-altered image of the agent, stating: “Let’s get his address. But only focus on HIM. Not his kids.” The post was liked nearly 3,500 times.
“AI-powered enhancement often tends to hallucinate facial features, resulting in an enhanced image that may look clear but lacks accuracy in biometric identification,” Hany Farid, a UC Berkeley professor who has studied AI’s ability to enhance facial images, told WIRED. “In cases where half of the face is obscured, AI, or any technique, cannot accurately reconstruct the facial identity.”
Some users sharing these images also claimed, without any proof, to have identified the agent, disseminating the names of real individuals and often linking to their social media profiles.
WIRED found that two of the names circulating do not appear to be connected to anyone affiliated with ICE. While many posts sharing these AI images have received limited engagement, some have attracted considerable attention.
Among the names circulated without evidence is Steve Grove, the CEO and publisher of the Minnesota Star Tribune, who previously worked in the administration of Minnesota Governor Tim Walz. “We are currently monitoring a coordinated online disinformation campaign incorrectly identifying the ICE agent involved in yesterday’s shooting,” Chris Iles, vice president of communications at the Star Tribune, told WIRED. “To be clear, the ICE agent has no known connection to the Minnesota Star Tribune and is certainly not our publisher and CEO Steve Grove.”
This is not the first time AI has sparked controversy in the wake of a shooting. A similar situation arose in September when Charlie Kirk was killed: an AI-enhanced image of the shooter, based on grainy surveillance footage released by law enforcement, circulated widely online. The resulting AI image bore no resemblance to the individual eventually captured and charged with Kirk’s murder.
