As reports emerge of police using facial recognition to target protestors, tech companies pause its use until Congress acts and the technology's biases are addressed.
In 2016, Amazon's cloud-computing group, Amazon Web Services (AWS), released Rekognition, a software service running on its platform that can identify objects in still images and video and digitally match a face against images in a database. Rekognition is not the only such software; several other big tech companies, including IBM, Microsoft, and Google, offer similar capabilities.
This facial recognition technology, and the Rekognition program in particular, has been found to be flawed and harmful. The ACLU tested it in 2018 by running faces of sitting U.S. Congresspeople through the program; twenty-eight members were incorrectly matched with people who had been arrested for crimes.
This human-developed software is also prone to the prejudices of its makers: several research studies have shown that facial recognition software carries racial and gender bias. In one such report on facial-analysis systems, Joy Buolamwini assembled a gender-balanced data set of over a thousand politicians from Africa and Europe. When she used it to test the accuracy of AI systems from IBM and other tech companies, she found that accuracy varied greatly with gender and skin color: the error rate for lighter-skinned males was 1 percent, while darker-skinned female faces were misclassified 35 percent of the time, often being assigned the wrong gender entirely.
Beyond bias, there is a great danger that this kind of powerful new software will misidentify people, undermining human rights. To make its point, the American Civil Liberties Union highlighted Amazon's relationships with a sheriff's office in Oregon and the city of Orlando, where neither agency could rely on the imperfect software.
This technology is now being used in mass surveillance, policing, and racial profiling. Recognizing this, IBM Chief Executive Officer Arvind Krishna announced in a letter to Congress that the company would no longer offer facial recognition or analysis software. IBM firmly opposes and will not condone the use of any technology, including facial recognition, for purposes inconsistent with its values and Principles of Trust and Transparency. Krishna also expressed IBM's willingness to work with Congress in pursuit of justice and racial equity.
Right after this announcement, and as pressure on tech companies mounted following the killing of George Floyd by a police officer in Minneapolis, Amazon followed suit, announcing a one-year ban on police use of its facial recognition software. The company said it has advocated for stronger government regulations to govern the ethical use of this technology.
Pressure groups and bloggers continue to urge big tech companies to end the sale of this dangerous technology to police departments. One blogger noted that a day after his post was published, Microsoft announced it would not sell facial recognition technology to police until a national law exists.
This suggests that these companies finally recognize the dangers that "face recognition poses to Black and Brown communities and civil rights more broadly." Nicole Ozer, technology and civil liberties director with the ACLU in California, also stated, “This surveillance technology’s threat to our civil rights and civil liberties will not disappear in a year. Amazon must fully commit to a blanket moratorium on law enforcement use of face recognition until the dangers can be fully addressed, and it must press Congress and legislatures across the country to do the same."
Without appropriate legislative guardrails, and with implicit biases still embedded in the software, the United States does not seem prepared for facial recognition technology.