The Dark Side of AI: Racial Bias and False Arrest

An assistant professor at the University of Calgary's Faculty of Law is shedding light on the numerous risks associated with facial recognition technology.



Illustration: Lenka T.

Dr. Gideon Christian, an esteemed expert in the field of AI and the law, issued a stern warning in a press release from the institution. He emphasized that racially biased artificial intelligence (AI) isn’t just misleading. It can have profoundly detrimental consequences, potentially ruining people’s lives. 

Dr. Christian’s pivotal research project, titled Mitigating Race, Gender, and Privacy Impacts of AI Facial Recognition Technology, has received a substantial grant of $50,000 from the Office of the Privacy Commissioner Contributions Program. This initiative aims to scrutinize racial issues within AI-based facial recognition technology in Canada. 

He challenged the misguided belief that technology, unlike humans, is impartial: “there is this false notion that technology, unlike humans, is not biased. That’s not accurate.”

People of color are at risk 

According to Dr. Christian, facial recognition technology poses a significant threat, particularly to individuals of color. He cautioned that the technology has been shown to replicate human bias.

Certain facial recognition systems recognize white male faces with over 99% accuracy. When it comes to faces of color, however, especially those of Black women, the technology exhibits its highest error rate: approximately 35%.

The ramifications of such errors can be severe. As Dr. Christian pointed out, facial recognition technology can erroneously match your face with that of someone who may have committed a crime, leading to a scenario where you find yourself facing the police at your doorstep, arrested for a crime you never committed.

While cases of misidentification are most frequently reported in the United States, Dr. Christian cautioned that Canada is not immune to such issues. He stated that researchers know various police departments in Canada are employing this technology. The absence of cases similar to those in the US may be attributable to the discreet use of the technology by Canadian law enforcement; records may either not exist or may not be publicly disclosed.

Dr. Christian highlighted instances in Canada where individuals, particularly Black women and immigrants who had successfully obtained refugee status, had that status revoked after facial recognition technology matched their faces to others. The government argued that they had made their claims under false identities, even though they belong to the demographic group most affected by the technology’s high error rate.

Free of bias 

Dr. Christian clarified that technology itself is not inherently biased, attributing the issue to the data used to train machine learning algorithms. He emphasized that technology generates outcomes based on the information it is fed. 
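This dynamic can be sketched with a toy model (entirely illustrative, with hypothetical data; it is not drawn from Dr. Christian's research): a simple classifier fit to a dataset dominated by one group minimizes overall error, yet performs far worse on the underrepresented group, because the data it was fed mostly describes the majority.

```python
# Toy illustration (hypothetical data, not from the research): a single
# decision threshold fit to a dataset dominated by one group achieves
# near-zero error on the majority while failing often on the minority.

def fit_threshold(xs, ys):
    """Pick the threshold with the fewest total misclassifications."""
    return min(sorted(set(xs)),
               key=lambda t: sum((x > t) != y for x, y in zip(xs, ys)))

def error_rate(xs, ys, t):
    """Fraction of samples the threshold t misclassifies."""
    return sum((x > t) != y for x, y in zip(xs, ys)) / len(xs)

# Majority group: 900 samples whose true decision boundary is at 0.
xs_a = [-2 + 4 * i / 899 for i in range(900)]
ys_a = [x > 0 for x in xs_a]

# Minority group: 100 samples whose true boundary is shifted to 1.
xs_b = [-1 + 4 * i / 99 for i in range(100)]
ys_b = [x > 1 for x in xs_b]

# Fitting on the combined data lands near the majority's boundary.
t = fit_threshold(xs_a + xs_b, ys_a + ys_b)
print(f"majority-group error rate: {error_rate(xs_a, ys_a, t):.1%}")
print(f"minority-group error rate: {error_rate(xs_b, ys_b, t):.1%}")
```

The same imbalance appears in face recognition when one demographic dominates the training images: the learned model fits the majority well and mislabels the rest, exactly the pattern Dr. Christian describes.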

He also stressed the importance of striving for a society free from racial bias and discrimination, noting that most people want to live in such a society. Artificial intelligence technology must not be allowed to subtly perpetuate racial bias.

Dr. Christian also pointed out that this issue is not entirely new. Rather, it is an age-old problem now manifesting itself in new ways through artificial intelligence technology. He warned that, if left unaddressed, it could undermine the progress achieved through the civil rights movement and erode years of hard-fought advances.
