
Racial Profiling In Face Recognition Tech: Can Facial Recognition Be Racist?

Facial recognition technology and its capabilities have grown far beyond what most of us imagined, especially since new algorithms brought the tech into mainstream law enforcement. Today, police forces use face recognition at the earliest stages of data gathering to identify criminal suspects in crowded gatherings. The technology takes footage from CCTV cameras in streets and public places and runs the captured faces against agency archives to detect anyone wanted for a criminal offense.

This technology has also been embedded into the smallest of gadgets, including mobile phones and smart wearable devices. So it is not only watching over you in the streets; it is also meant to protect the personal information stored on your smart devices. Using a “faceprint” for marketing and advertising has become common in the modern age of social media campaigning. And then there is private surveillance in malls, retail stores, and so on.

From this perspective, one can quickly point out the undeniable advantages of facial recognition tech. But it has also been scrutinized for the threat it poses to user privacy, data protection, and, of course, transparency between the law and the public. It is worth being aware of both the advantages and the drawbacks of such an invasive technology. Yet there is one more drawback that people seem to ignore: racial profiling and racial discrimination.

In this piece, we look at how this tech promotes racial bias and discrimination, and how grave the repercussions of such invasive technology can be.

How Does Facial Recognition Work?


Step 1: A picture of you is captured from a camera, your account, your email, etc. It may be a straight profile picture or a random snap in a crowd.

Step 2: The face recognition software runs your face through a database of stored faceprints. A faceprint is gathered by geometrically mapping your face.

Step 3: An algorithm produces a match score for your picture against every known faceprint, and a determination is made from the best match. A rough sketch of this matching step is shown below.
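To make the pipeline concrete, here is a minimal, illustrative sketch of the matching step in Python. It assumes the “faceprint” has already been extracted as a numeric embedding by some model; the embedding size, names, and threshold are invented for illustration and are not taken from any real system.

```python
# Minimal sketch of the matching step, assuming face embeddings (the numeric
# "faceprint") have already been extracted by some model. All names, sizes,
# and the threshold below are illustrative assumptions.
import numpy as np

EMBEDDING_DIM = 128          # typical embedding size; varies by model
MATCH_THRESHOLD = 0.80       # hypothetical similarity cut-off

def cosine_similarity(a, b):
    """Similarity between two faceprint vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, gallery):
    """Compare one probe faceprint against every stored faceprint and
    return the highest-scoring identity and its score."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

# Toy data standing in for a database of stored faceprints.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=EMBEDDING_DIM) for i in range(1000)}
probe = gallery["person_42"] + rng.normal(scale=0.1, size=EMBEDDING_DIM)  # noisy re-capture

name, score = best_match(probe, gallery)
print(name, round(score, 3), "match" if score >= MATCH_THRESHOLD else "no match")
```

In a real deployment the embeddings would come from a trained neural network and the threshold would be tuned on evaluation data, but the decision logic follows the same compare-and-threshold pattern.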

Automation Bias: One of Many Flaws of Facial Recognition Tech

Automation bias, or machine bias, refers to a scenario in which a machine algorithm exhibits a systematic skew in how it processes input data, producing erroneous output. This can happen when there is an error in the algorithm’s code, too few stored datasets for calibration, incorrect input values, or more input data than the system can reliably handle.
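A toy simulation can show how such a skew creeps in. Suppose, purely hypothetically, that a match threshold is tuned so that non-matching faces from one group rarely cross it, and is then applied unchanged to a second group whose score distribution the tuning never considered; every number below is invented.

```python
# Toy simulation of calibration bias: a match threshold is tuned on group A's
# non-match scores only, then applied unchanged to group B. All distributions
# here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical similarity scores for NON-matching pairs (no true match exists).
group_a_scores = rng.normal(loc=0.40, scale=0.10, size=100_000)  # well represented in tuning
group_b_scores = rng.normal(loc=0.55, scale=0.15, size=100_000)  # never seen during tuning

# Threshold chosen to give a 0.1% false-positive rate on group A only.
threshold = np.quantile(group_a_scores, 0.999)

fpr_a = np.mean(group_a_scores >= threshold)
fpr_b = np.mean(group_b_scores >= threshold)
print(f"threshold={threshold:.3f}  FPR group A={fpr_a:.4%}  FPR group B={fpr_b:.4%}")
# Group B's false-positive rate comes out far higher, even though the exact
# same rule is applied to everyone.
```

The point is not that anyone intends this outcome: the same threshold is applied to both groups, yet the group the calibration never accounted for gets flagged far more often.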

How Does Racial Profiling Fit Into All of This?


Let’s start with an early incident that was deemed insignificant at the time. In 2001, the city of Tampa used face recognition software to surveil the crowds that flooded its streets for the Super Bowl. According to a New York Times report, the software identified 19 people who supposedly had outstanding warrants against them; however, no arrests were made, as the stadium’s layout made it impossible to reach the identified suspects in the overwhelming crowd.

While there were no signs of racial profiling in this particular case, it was the first time surveillance techniques were weighed against violations of civil liberties and individual privacy. In the following years, Tampa police gave up on these surveillance systems, citing unreliable results.


Fast-forward to a more recent case: Ali Breland reported for The Guardian on the arrest of Willie Lynch, a black man accused of being a notorious drug dealer in Brentwood, a predominantly black neighborhood. The only evidence against Lynch was a photo on a mobile phone that was run against a police database before officers identified him as the culprit. Lynch was sentenced to eight years and has since appealed the conviction. Whether or not he was the dealer, the case raises an obvious concern: is a machine-generated match alone enough to uphold a conviction?

In 2019, as Tom Perkins reported for The Guardian, Detroit police were found to have been using face recognition to make arrests for allegedly the past two years. Detroit is a city where more than 80% of the population is black. A black member of the Detroit Police Commission raised concerns about the practice, saying the system’s algorithm is tripped up by facial traits common among black people, and termed this “techno-racism.”


A 2019 study by Fabio Bacchini and Ludovica Lorusso, published in the Journal of Information, Communication and Ethics in Society, found that these biometric and face recognition systems are not reliable enough for law enforcement use, and that racial discrimination afflicts all such systems, with broader negative implications for society. The study focused on Western societies in particular, where such systems are used extensively for surveillance.

These are just three of the many cases in which racial disparities caused by face recognition systems have come to light. But why are these systems so incompetent despite ever-improving algorithms and constant technological upgrades?

White Supremacy in the Western States: A White-Dominant Tech Industry

In 2014, most major tech companies, including the giant Apple Inc., were found to be hiring mostly white, male employees. At Apple, 55% of employees were white, and 63% of its leadership was white. Companies that shared similar diversity reports included Facebook, Google, and Twitter. Five years later, a report in Wired showed that there had been only minimal improvement in these numbers.

While Facebook showed a decent improvement in its numbers, the share of black technical workers at Apple was unchanged at a mere 6% of the workforce. Amazon was the only organization reporting that 42% of the workers in its US offices were black or Latin American.

What do these stats signify? In the US, most of the coders assigned to major projects, such as designing algorithms for surveillance systems, are white. These are the people making the most significant decisions about a product or service before a company launches it, and so it is their perspectives, approaches, and thought processes that shape the final creation. This is not to imply that white people are racist and have purposely designed such surveillance systems this way. NO!


When a white engineer designs a face recognition algorithm and has only white colleagues consulting and assisting, the team rarely considers the facial traits of people of other ethnicities before finalizing the code. And because white engineers dominate the tech industry, the data archives used to build and calibrate the initial code are also assembled by white technicians. The code is thus created with a bias at the core of its matching algorithm, which shows up as racial disparities in surveillance results.

The code simply learns what white people embed in it; there is no perspective or contribution from people of other ethnicities.
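One practical consequence is that per-group error rates often go unchecked before deployment. The sketch below shows the kind of simple audit that would surface such a disparity; the matcher, groups, and scores are all hypothetical placeholders, not measurements from any real system.

```python
# Hypothetical pre-deployment audit: measure the false match rate separately
# for each demographic group in a labelled evaluation set, rather than
# reporting one overall accuracy number. All data here is made up.
import numpy as np

MATCH_THRESHOLD = 0.80

def false_match_rate(scores):
    """Share of known NON-matching pairs whose score still crosses the threshold."""
    return float(np.mean(np.asarray(scores) >= MATCH_THRESHOLD))

rng = np.random.default_rng(7)
# Similarity scores for non-matching pairs, broken out by group.
eval_scores = {
    "group_A": rng.normal(loc=0.45, scale=0.12, size=50_000),
    "group_B": rng.normal(loc=0.60, scale=0.12, size=50_000),
}

for group, scores in eval_scores.items():
    print(f"{group}: false match rate = {false_match_rate(scores):.2%}")
# A large gap between groups is a red flag that the training data or the
# calibration did not represent everyone equally well.
```

An audit like this does not fix the underlying data problem, but it makes the disparity visible before the system is put in front of police officers.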

The Calibration Issues

American law enforcement relies heavily on surveillance and data tracking. There have been many instances in which whistle-blowers disclosed unauthorized surveillance of civilians; Edward Snowden’s revelation of the NSA’s illegal surveillance is one such example.


These surveillance programs depend on faceprints and other personal information from millions of citizens. Considering faceprints alone, millions of Americans openly share pictures on social media platforms, and CCTV cameras on streets across the nation deliver live footage of hundreds of thousands of passersby. There are currently around 117 million images in police databases, while the FBI holds more than 400 million images across the datasets its surveillance face recognition algorithms search.

Now imagine those datasets being compared against a single image that may or may not have captured all of a person’s facial traits. In such a scenario, errors are bound to arise: there is simply too much data to run against one faceprint. No algorithm can guarantee its result when the calibration is this complicated, and that, in turn, feeds the racial profiling caused by face recognition tech.
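A rough back-of-the-envelope calculation shows why errors are almost guaranteed at this scale. Assuming, purely for illustration, a one-in-a-million per-comparison false match rate, a single search against 117 million images is still expected to throw up over a hundred false candidates:

```python
# Back-of-the-envelope: expected false hits in a one-to-many search.
# The per-comparison false match rate (FMR) is an illustrative assumption,
# not a measured figure for any real system.
gallery_size = 117_000_000        # images in police databases, per the figure above
fmr = 1e-6                        # assumed chance a random non-match crosses the threshold

expected_false_hits = gallery_size * fmr
prob_at_least_one = 1 - (1 - fmr) ** gallery_size
print(f"expected false hits per search: {expected_false_hits:.0f}")
print(f"probability of at least one false hit: {prob_at_least_one:.6f}")
# Even a one-in-a-million error rate yields ~117 false candidates per search,
# and at least one false hit is a near-certainty.
```

And if the system is less accurate for some groups than others, as the earlier simulation suggests it can be, those false candidates will not be spread evenly across the population.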

The Immense Reliance on Face Recognition


The case of Willie Lynch is a reminder that face recognition should not be the sole evidence relied upon in law enforcement. This is why the Tampa city police gave up on the tech.

It is true that face recognition is an excellent resource and a real help to the police; the Boston Marathon bombing suspects were identified through extensive, detailed analysis of surveillance recordings. But this cannot be the sole evidence used to convict anyone. There must be corroborating evidence to back the results of face recognition algorithms, and the possibility of automation bias must be weighed before a final determination is reached.

The Hardware Trouble: Face Recognition in Mobile and Cameras


Surveillance camera systems and their associated hardware and software are not designed by a single company. It is an industry worth billions of dollars, in which dozens of corporations compete for contracts from law enforcement agencies, and many of these systems come from Chinese manufacturers. The goal is usually to get the cheapest tech with the best specifications; that is largely how the market works. As a result, there are always differences in how different systems are calibrated, along with variation in the quality of surveillance results. Many camera surveillance algorithms are poor at calibrating on images of people of color simply because of technical shortcomings, thus reinforcing racial discrimination.

The technical issues behind face recognition’s racism have also shown up in Apple’s Face ID face lock feature. A case from China revealed that the iPhone X face lock could not tell two Chinese coworkers apart, rendering the feature useless. Similar reports, of the feature failing to distinguish two black people from one another, were dismissed. As stated above, only 6% of Apple’s technical teams are black. It is a clear example of how face recognition tech can promote racism even in our handheld devices.

Conclusion

Yes, facial recognition is racist, and that is common knowledge by now. While the technology improves daily to rectify such issues, the results remain much the same. Technology is supposed to unite the world around common goals of advancement and development, but some of these techniques only damage racial and communal harmony.

For now, the best thing law enforcement officials can do is not rest their cases on evidence from algorithmic calibrations that are not yet reliable. Moreover, it is high time that diversity and inclusion in the workplace are taken seriously, so that people of all ethnicities can come together to create products free of racial disparities. There are countless ethnicities in the world, and people have worked to set aside the racial divisions that haunted global society for so long. If that progress is to be maintained, then the machines on which we rely so heavily must be taught the same.
