Using Machines to Find Criminals
- Nikita Silaech
- Dec 16, 2025
- 3 min read

Law enforcement has adopted facial recognition technology on the assumption that machines can do what humans struggle with: identify suspects accurately and without the biases that come from human judgment. The pitch sounds compelling on its surface. A surveillance camera captures a face. The system checks a database of millions of faces and returns candidates ranked by mathematical similarity. An investigator narrows the list. An eyewitness looks at the photographs. Finally, a suspect is identified.
However, this process contains a hidden flaw that keeps putting innocent people in interrogation rooms, and it operates in a way that most investigators never fully consider.
Facial recognition systems are not designed to return exact matches. They're designed to return lookalikes, ranked by mathematical distance in a feature space. When an investigator searches a database of hundreds of millions of faces, the system doesn't find the suspect; it finds the most similar candidates and presents them in order of similarity. That sounds helpful until you realize that the suspect might not be in the database at all, that their photograph might not have been taken from an angle that matches the surveillance image, or that the database itself might contain systematic gaps that favor some faces over others.
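To make that concrete, here is a minimal sketch of how this kind of similarity search works, assuming each face has already been reduced to a fixed-length embedding vector by some recognition model. Everything in it, the embeddings, the database size, and the number of candidates returned, is hypothetical and purely illustrative.

```python
# Minimal sketch of a face-recognition candidate search over embedding vectors.
# All data here is random and hypothetical; a real system would use embeddings
# produced by a trained model and a far larger database.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in database: each row is one face's embedding (10,000 keeps the demo fast).
database = rng.normal(size=(10_000, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)

# Probe embedding from a surveillance frame. Nothing guarantees this person
# is actually in the database.
probe = rng.normal(size=128)
probe /= np.linalg.norm(probe)

# Cosine similarity against every entry, then keep the top five lookalikes.
scores = database @ probe
top_k = np.argsort(scores)[::-1][:5]

for rank, idx in enumerate(top_k, start=1):
    print(f"rank {rank}: database entry {idx}, similarity {scores[idx]:.3f}")
```

Note that the search always returns a ranked list of plausible-looking candidates, whether or not the person in the probe image exists in the database at all.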
When an investigator presents these lookalike candidates to an eyewitness as if they've narrowed the field to promising options, the eyewitness is already being guided toward a conclusion. They look at the photographs that someone official has suggested are worth their attention, and they often cannot distinguish between the actual suspect and an innocent lookalike ranked highly by the algorithm.
We see this scenario play out when we examine what actually causes wrongful convictions in the United States. Mistaken eyewitness identification remains the leading cause, ahead of DNA contamination, false confessions, and inadequate legal representation, and it accounts for more wrongful convictions than any other factor by a significant margin (Georgetown Law, 2021).
Facial recognition was adopted as a tool to strengthen eyewitness investigations; instead, it has become a mechanism that manufactures exactly the type of identification error that lands innocent people in prison for crimes they didn't commit.
The deeper problem lies in how law enforcement has adopted this technology without building in any circuit-breakers between the system's output and the eyewitness identification. Eyewitness identification is unreliable on its own. Facial recognition produces lookalike candidates ranked by similarity, and investigators present them to eyewitnesses as if the system has already vetted them. Nothing in this chain corrects human error; it formalizes it instead. The investigator outsources the decision to the witness, the witness confirms what has been suggested to them, and the technology gets credited for an identification it technically generated but never independently verified.
Some jurisdictions have begun implementing safeguards that actually address this problem. A few require that facial recognition candidates be presented to eyewitnesses alongside fillers, photos of people known to be uninvolved, often drawn from lower-ranked results, so that investigators can test whether the eyewitness can actually distinguish the suspect from imposters (Council on Criminal Justice, 2025).
Others mandate that investigators document whether an eyewitness independently identified someone or was influenced by the system's ranking. A handful have moved toward requiring corroborating evidence before facial recognition results in arrest, which at least interrupts the flow from algorithmic suggestion directly to prosecution.
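The filler safeguard described above is easy to sketch in code. Under the assumption that an agency maintains a pool of photos of people known to be uninvolved (the identifiers, pool, and array size below are hypothetical), the idea is simply to mix the algorithm's top candidate into a shuffled array of fillers and present it blind:

```python
# Sketch of a filler-based photo array. Identifiers and the filler pool are
# hypothetical; the point is that the witness must pick the candidate out from
# known-uninvolved faces rather than from a list the algorithm has endorsed.
import random

def build_photo_array(candidate_id: str, filler_pool: list[str],
                      n_fillers: int = 5, seed: int | None = None) -> list[str]:
    """Mix one algorithm-ranked candidate with known-uninvolved fillers and shuffle."""
    rng = random.Random(seed)
    fillers = rng.sample(filler_pool, n_fillers)
    photo_array = fillers + [candidate_id]
    rng.shuffle(photo_array)
    return photo_array

# Example: the administrator showing this array should not know which entry
# is the algorithm's candidate (a double-blind presentation).
array = build_photo_array("candidate_0042",
                          [f"filler_{i:03d}" for i in range(100)],
                          n_fillers=5, seed=7)
print(array)
```

With five fillers, a witness guessing at random lands on the candidate only one time in six, and a pick of a filler is direct evidence that they cannot distinguish the suspect from lookalikes.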
But these safeguards remain the exception. In most jurisdictions, facial recognition operates without any transparency into how the algorithm weights facial features, how the database was constructed, whether the system has been tested for accuracy across racial and ethnic groups, or whether the false positive rate varies with a person's race or gender (Statewatch, 2025).
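That last question, whether error rates differ by group, is at least straightforward to audit when labeled test data exists. Here is a minimal sketch, using entirely synthetic records, of computing the false positive rate separately for each demographic group:

```python
# Per-group false positive rate audit on synthetic, hypothetical records.
# Each record: (group label, system declared a match, the match was actually correct).
from collections import defaultdict

records = [
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", True,  False),
    ("group_b", True,  False),
    ("group_b", False, False),
]

false_positives = defaultdict(int)
non_matches = defaultdict(int)

for group, predicted_match, is_true_match in records:
    if not is_true_match:              # only true non-matches can produce false positives
        non_matches[group] += 1
        if predicted_match:
            false_positives[group] += 1

for group in sorted(non_matches):
    fpr = false_positives[group] / non_matches[group]
    print(f"{group}: false positive rate = {fpr:.2f}")
```

A real audit would use thousands of labeled trials per group and report uncertainty, but even this skeleton makes the question answerable rather than rhetorical.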
The European Union has explicitly banned predictive policing, yet law enforcement agencies across Europe continue to use algorithms to predict where crimes will occur and to profile individuals as potential criminals, often without public knowledge or oversight. That suggests restrictions on one type of algorithmic system don't necessarily prevent similar systems from being adopted under different names.




