CBP Finalizes Agreement with Clearview AI to Use Facial Recognition for Strategic Targeting

United States Customs and Border Protection plans to pay $225,000 for a year’s access to Clearview AI, a facial recognition tool that matches uploaded images against billions of photos scraped from the internet.
This agreement extends Clearview access to the Border Patrol’s headquarters intelligence division (INTEL) and the National Targeting Center, which gather and analyze data in what CBP describes as a coordinated strategy to “disrupt, degrade, and dismantle” individuals and networks identified as security risks.
The contract says Clearview provides access to more than 60 billion “publicly available images” and will be used for “tactical targeting” and “strategic counter-network analysis.” That suggests the service is meant to be folded into analysts’ day-to-day intelligence work rather than reserved for isolated investigations. CBP emphasizes that its intelligence units draw on a “variety of sources,” including commercially available tools and publicly accessible data, to identify individuals and map their connections for national security and immigration purposes.
The agreement anticipates that analysts will handle sensitive personal information, including biometric identifiers such as facial images, and requires contractors with access to sign confidentiality agreements. It does not say what types of photos agents can upload, whether searches may include U.S. citizens, or how long uploaded images and search results will be retained.
The Clearview contract arrives as the Department of Homeland Security faces growing scrutiny over the use of facial recognition in federal enforcement activities well beyond the border, including large-scale operations in U.S. cities that may sweep in U.S. citizens. Civil liberties organizations and lawmakers have questioned whether facial recognition tools have become standard intelligence infrastructure rather than limited investigative aids, and whether protections have kept pace with that expansion.
Recently, Senator Ed Markey introduced a bill that would prohibit ICE and CBP from employing facial recognition technology entirely, citing worries that biometric surveillance is being adopted without clear limitations, transparency, or public consent.
CBP did not immediately respond to questions about how Clearview will be integrated into its systems, what types of images agents are permitted to upload, or whether searches could include U.S. citizens.
Clearview’s business model has also drawn criticism: the company scrapes photos at scale from public websites and converts them into biometric templates without the knowledge or consent of the people depicted.
Clearview also appears in DHS’s recent artificial intelligence inventory, tied to a CBP pilot that began in October 2025. The inventory entry links the pilot to CBP’s Traveler Verification System, which performs facial comparisons at ports of entry and in other border-related screenings.
CBP’s public privacy documentation, however, states that the Traveler Verification System does not use information from “commercial sources or publicly available data.” That makes it more likely that, at least at launch, Clearview access would be tied to CBP’s Automated Targeting System, which links biometric galleries, watch lists, and enforcement records, including files related to recent Immigration and Customs Enforcement operations far from the border.
Clearview AI did not immediately respond to a request for comment.
A recent evaluation by the National Institute of Standards and Technology, which tested Clearview AI alongside other vendors, found that facial recognition systems can perform well on “high-quality visa-like photos” but struggle in less controlled settings. For images taken at border crossings that were “not originally intended for automated facial recognition,” error rates were “significantly higher, often exceeding 20 percent, even with the more accurate algorithms,” according to federal scientists.
The testing underscores a fundamental limitation of the technology: NIST found that facial recognition systems cannot be tuned to reduce false matches without also increasing the risk of failing to recognize the correct individual.
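That tradeoff can be illustrated with a toy calculation. The sketch below uses invented similarity-score distributions, not Clearview’s or NIST’s data, to show how raising the match threshold pushes the false match rate (FMR) down while pushing the false non-match rate (FNMR) up:

```python
import random

random.seed(0)

# Invented similarity scores, for illustration only: higher means the
# algorithm is more confident two face images show the same person.
genuine = [random.gauss(0.70, 0.12) for _ in range(10_000)]   # same-person pairs
impostor = [random.gauss(0.40, 0.12) for _ in range(10_000)]  # different-person pairs

for threshold in (0.45, 0.55, 0.65, 0.75):
    fmr = sum(s >= threshold for s in impostor) / len(impostor)  # false matches
    fnmr = sum(s < threshold for s in genuine) / len(genuine)    # missed true matches
    print(f"threshold={threshold:.2f}  FMR={fmr:.3f}  FNMR={fnmr:.3f}")
```

No threshold drives both error rates to zero at once; any setting that makes false matches rarer makes missed identifications more common.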
As a result, NIST suggests agencies use the software in an “investigative” mode that returns a ranked list of potential matches for human review rather than a single confirmed match. But when a system is configured to always return multiple candidates, a search for someone who is not in the database will still produce “matches” for examination, and in those cases every candidate returned is a false match.
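A minimal sketch of such a ranked-list search, with hypothetical names and data throughout, shows why: the function below returns its top candidates whether or not the person in the probe image exists in the gallery.

```python
def investigative_search(probe_vec, gallery, k=5):
    """Rank every gallery entry by similarity to the probe and return
    the top k. Note that there is no minimum-score cutoff."""
    def similarity(a, b):
        # Toy similarity: negative squared distance between face embeddings.
        return -sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(gallery, key=lambda e: similarity(probe_vec, e["vec"]),
                    reverse=True)
    return ranked[:k]

# A probe of someone who is NOT in the gallery still yields k "hits."
gallery = [{"name": f"person_{i}", "vec": [i * 0.01, 1 - i * 0.01]}
           for i in range(100)]
probe = [0.333, 0.914]
for hit in investigative_search(probe, gallery, k=3):
    print(hit["name"])  # three candidates, all of them necessarily wrong
```

Human reviewers are expected to weed out bad candidates, but the system itself gives no signal that the correct answer is “nobody.”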
