ICE and CBP’s Facial Recognition Tool Fails to Accurately Confirm Identities

The facial recognition application Mobile Fortify, currently utilized by United States immigration agents in various locations nationwide, is not equipped to reliably identify individuals in public spaces. Its deployment lacked the oversight typically associated with technologies affecting personal privacy, as indicated by documents examined by WIRED.

Launched in the spring of 2025 by the Department of Homeland Security, Mobile Fortify was intended to “determine or verify” the identities of individuals encountered by DHS officers during federal operations. Records reveal that DHS connected this rollout to an executive order signed by President Donald Trump on his inauguration day, which aimed for a “total and efficient” crackdown on undocumented immigrants through expedited removals, expanded detention, and funding pressures on states, among other methods.

Although DHS has consistently portrayed Mobile Fortify as a mechanism for identifying individuals via facial recognition, the app does not actually “verify” the identities of those stopped by federal immigration agents—a recognized limitation of the technology and a result of Mobile Fortify’s design and application.

“Every manufacturer of this technology and every police department’s policy makes it clear that facial recognition technology cannot provide positive identification, is prone to errors, and is only useful for generating leads,” says Nathan Wessler, deputy director of the American Civil Liberties Union’s Speech, Privacy, and Technology Project.

Records analyzed by WIRED reveal that the expedited approval of Fortify last May was facilitated by the dismantling of centralized privacy reviews and the quiet removal of department-wide restrictions on facial recognition—changes overseen by a former Heritage Foundation attorney and Project 2025 contributor, who now occupies a senior privacy role at DHS.

DHS has refrained from providing details about the methods and tools agents employ, despite ongoing requests from oversight officials and privacy advocacy groups. Mobile Fortify has been used to scan the faces of not just “targeted individuals,” but also confirmed US citizens and bystanders observing or protesting enforcement actions.

Reports have documented federal agents informing citizens that their faces were being captured via facial recognition and that their images would be added to a database without consent. Additional accounts describe agents escalating encounters based on accent, perceived ethnicity, or skin color, then following up with face scans once the stop occurred. These instances highlight a broader trend in DHS enforcement toward low-level public encounters followed by biometric capture, such as facial scanning, with minimal transparency about the tool’s use.

Fortify’s technology collects facial data hundreds of miles from the US border, enabling DHS to create nonconsensual face prints of individuals who, according to DHS’s Privacy Office, may be “US citizens or lawful permanent residents.” Details of Fortify’s deployment to agents from Customs and Border Protection and Immigration and Customs Enforcement, like its capabilities, have emerged primarily through court documents and sworn agent testimony.

In a federal lawsuit this month, attorneys representing the State of Illinois and the City of Chicago indicated that the app had been utilized “in the field over 100,000 times” since its inception.

During testimony in Oregon last year, an agent recounted that two photos of a woman in custody, run through the facial recognition app, returned different identities. Because the woman was handcuffed and looking down, the agent physically adjusted her position for the first image, causing her to yelp in pain. The app returned the name and photo of a woman named Maria, a match the agent described as “a maybe.”

Agents called out, “Maria, Maria,” to elicit a response. When she did not react, they captured another photo. The agent stated the second result was “possible,” but added, “I don’t know.” When questioned about the basis for probable cause, the agent referenced the woman speaking Spanish, her presence with individuals who seemed to be noncitizens, and a “possible match” via facial recognition. The agent testified that the app provided no indication of the confidence level in the match. “It’s just an image, your honor. You have to examine the eyes, nose, mouth, and lips.”
