Facial-recognition systems misidentified people of colour more often than white people, a landmark United States study shows, casting new doubts on a rapidly expanding investigative technique widely used by police across the country.
Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. The study, which found a wide range of accuracy and performance among developers’ systems, also showed Native Americans had the highest false-positive rate of all ethnicities.
The faces of African American women were falsely identified more often in the kinds of searches used by police investigators, in which an image is compared with thousands or millions of others in the hope of identifying a suspect.
Algorithms developed in the US also showed high error rates for “one-to-one” searches of Asians, African Americans, Native Americans and Pacific Islanders.
Such searches are critical to functions like cellphone sign-ons and airport boarding schemes, and errors could make it easier for impostors to gain access to those systems.
Women were more likely to be falsely identified than men, and the elderly and children were more likely to be misidentified than those in other age groups, the study found. Middle-aged white men generally benefited from the highest accuracy rates.
The National Institute of Standards and Technology, the federal laboratory known as Nist that develops standards for new technology, found “empirical evidence” that most of the facial-recognition algorithms exhibit “demographic differentials” that can worsen their accuracy based on a person’s age, gender or race.
The study could fundamentally shake one of US law enforcement’s fastest-growing tools for identifying criminal suspects and witnesses, which privacy advocates argue is ushering in a dangerous new wave of government surveillance tools.
The FBI has logged more than 390,000 facial-recognition searches of state driver-licence records and other federal and local databases since 2011, federal records show.
But members of Congress this year have voiced anger over the technology’s lack of regulation and its potential for discrimination and abuse.
The federal report confirms previous studies that found similarly staggering error rates.
Companies such as Amazon had criticised those studies, saying they reviewed outdated algorithms or used the systems improperly.
One researcher, Joy Buolamwini, said the study was a “comprehensive rebuttal” to sceptics of what researchers call “algorithmic bias”.
“Differential performance with a factor of up to 100?!” she told the Washington Post in an email. The study, she added, is “a sobering reminder that facial-recognition technology has consequential technical limitations alongside posing threats to civil rights and liberties”.
The researchers said they did not know what caused the gap but hoped the findings would, as Nist computer scientist Patrick Grother said, prove “valuable to policymakers, developers and end users in thinking about the limitations and appropriate use of these algorithms”.
Jay Stanley, a senior policy analyst at the American Civil Liberties Union, which sued federal agencies this year for records related to how they use the technology, said the research showed why government leaders should immediately halt its use.
“One false match can lead to missed flights, lengthy interrogations, tense police encounters, false arrests, or worse,” he said. “But the technology’s flaws are only one concern. Face-recognition technology — accurate or not — can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale.”
Nist’s test examined most of the industry’s leading systems, including 189 algorithms voluntarily submitted by 99 firms, academic institutions and other developers. The algorithms form the central building blocks for most of the facial-recognition systems around the world.
The algorithms came from a range of major tech companies and surveillance contractors, including Idemia, Intel, Microsoft, Panasonic, SenseTime and Vigilant Solutions. Notably absent from the list was Amazon, which develops its own software, Rekognition, for sale to local police and federal investigators to help track down suspects.
Nist said Amazon did not submit its algorithm for testing. Amazon did not offer comment but has said previously that its cloud-based service cannot be easily examined by the Nist test. Amazon founder and chief executive Jeff Bezos owns the Washington Post.