After years of documenting our lives online, can we shield our identities from facial recognition databases? These PhD students may have a solution.
In recent years, it’s come to light that many companies are collecting people’s data and personal information for profit. While you may not mind this when it takes the form of targeted ads on your Instagram feed, applications such as facial recognition should raise red flags.
For context, a start-up called Clearview AI has amassed more than three billion photos from Facebook, YouTube, and countless other websites to share with local, state, and federal law enforcement.
In an attempt to safeguard photos from facial recognition, PhD students at the University of Chicago have developed a system called Fawkes. And yes, it is named after the Guy Fawkes mask popularized by the film V for Vendetta and worn by protestors around the world.
How Does it Work?
Fawkes makes pixel-level alterations to images, using a database of celebrity faces to confound facial recognition systems. The changes are small enough to be imperceptible to the human eye, so users can still upload quality images.
This isn’t a one-time fix, though: for the best protection, all images of a person would need to be cloaked by Fawkes. The idea is to poison the training data. A facial recognition model trained on cloaked photos learns a distorted version of your face, so if an unaltered photo were later shown to the model, it wouldn’t recognize you or trace it back to your Facebook or Instagram account.
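To make the idea concrete, here is a minimal sketch of bounded-perturbation cloaking in Python. It is not the actual Fawkes algorithm: Fawkes optimizes perturbations in a real face-recognition model's feature space, whereas this toy uses a random linear map as a stand-in "feature extractor" (an assumption for illustration only). The sketch nudges an image's features toward a different face while clipping every pixel change to a small, invisible budget.

```python
import numpy as np

# Illustrative sketch only -- NOT the real Fawkes algorithm. A random linear
# projection W stands in for a face-recognition feature extractor.
rng = np.random.default_rng(0)

def embed(img, W):
    """Toy 'feature extractor': flatten the image and project it."""
    return W @ img.ravel()

def cloak(img, target_img, W, eps=3.0, steps=50, lr=0.5):
    """Push img's embedding toward target_img's embedding while keeping
    every pixel within +/- eps of the original (imperceptibly small)."""
    img = img.astype(np.float64)
    cloaked = img.copy()
    target_feat = embed(target_img, W)
    for _ in range(steps):
        # Gradient of ||embed(cloaked) - target_feat||^2 w.r.t. the pixels.
        grad = 2 * W.T @ (embed(cloaked, W) - target_feat)
        cloaked -= lr * grad.reshape(img.shape)
        # Project back into the "invisible change" box around the original.
        cloaked = np.clip(cloaked, img - eps, img + eps)
        cloaked = np.clip(cloaked, 0, 255)
    return cloaked

# An 8x8 grayscale "photo" and a stand-in "celebrity" target image.
photo = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
celeb = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
W = rng.normal(size=(16, 64)) / 8.0

cloaked = cloak(photo, celeb, W)
print(np.abs(cloaked - photo).max() <= 3.0)  # prints True: changes stay tiny
```

The key property the sketch demonstrates is the trade-off Fawkes exploits: the image's machine-readable features move, while no pixel changes by more than the visibility budget.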
Lack of Facial Recognition Regulation
So far, regulation on the use of facial recognition has been limited. On August 4th, Senators Bernie Sanders and Jeff Merkley introduced the National Biometric Information Privacy Act, which would require corporations to obtain written consent to gather biometric data. This includes facial appearance and other individual characteristics, such as DNA, fingerprints, and voice.
Racial Bias in Law Enforcement
Black men are more than 2.5 times more likely to be killed by law enforcement than white men in the United States. The deployment of unidentified law enforcement officers and brute force in the wake of George Floyd’s murder demonstrates the role facial recognition could play in targeted harassment and violence toward protestors and activists.