Law enforcement and intelligence agencies around the world use facial recognition technology and other AI in investigations to track targets' movements and as evidence in prosecutions. While books and movies often portray this technology as highly advanced and foolproof, the reality can be quite different. Recent cases have demonstrated that facial recognition technology is far from foolproof and misidentifies people of color at higher rates.

Companies such as Clearview AI have contracts with law enforcement agencies across the United States. Clearview scrapes billions of photos from LinkedIn, Facebook, Instagram and the public web and uploads them to its database. For an annual fee, law enforcement gains access to a face-based search engine. A number of factors affect the reliability of the search results: the quality and resolution of the surveillance photo used for the search, the ethnicity of the subject, and the number and age of the comparison photos can all affect the outcome.

Most troubling, however, is that in a number of cases law enforcement appears to have failed to use other investigative means before seeking arrest warrants for the individuals “identified” by facial recognition software. As Clearview’s CEO recently made clear to the New York Times, when its technology produces an “initial result,” that result should be the starting point of law enforcement’s investigation, not its conclusion. In other words, facial recognition technology should be used only as one investigative tool that may provide a lead, which must then be coupled with other basic investigative steps.

A recent example is the case of Randal Quran Reid, who was driving near Atlanta when he was stopped and arrested for alleged thefts in Louisiana. The police in Baton Rouge and Jefferson Parish had apparently used facial recognition on store videos purporting to show Mr. Reid stealing valuable items. Warrants for his arrest were issued, and Mr. Reid was held for days pending extradition for crimes he did not commit. The New York Times reports that it seems likely local police simply used the faulty facial recognition identification to obtain the arrest warrants. It is difficult to know for sure because many law enforcement agencies do not reveal that such technology was used, or that it was the sole or primary basis for the warrant.

What should officials in Louisiana have done before issuing an arrest warrant based on facial recognition? First, actually compare the photos to Mr. Reid. According to the article, Mr. Reid is smaller, lighter and less muscular than the actual thief. Next, investigate whether Mr. Reid was actually in Louisiana around the time of the thefts. How does one determine that? Examine credit card receipts for gas purchases or any other items in the state, travel records and E-ZPass records, social media, license plate readers and the like. Conversely, look for evidence that Mr. Reid was in his home state during the thefts. In fact, had law enforcement checked, they would have discovered that Mr. Reid had never been to Louisiana.

Artificial intelligence is advancing at breakneck speed, and law enforcement is eager to use all forms of technology to assist in its investigations. The key, however, is that AI should only be used to “assist,” not to conclude. Traditional investigative techniques must be employed to corroborate and verify the alleged results of such database searches, lest more innocent people sit in jail hoping that their attorneys can free them.

Stahl Gasiorowski Criminal Defense Attorneys actively and aggressively protect clients’ rights and challenge the use of such AI-based searches in pretrial hearings and at trial. To contact Mr. Stahl, call 908.301.9001 for the NJ office or 212.755.3300 for the NYC office, or email Mr. Stahl at rgs@sgdefenselaw.com.