In the ever-evolving landscape of artificial intelligence, the promise of autonomous vehicles navigating our streets has captivated the imagination of many.
However, a groundbreaking study by researchers at King’s College London and Peking University in China has cast a stark shadow over this technological utopia. Their findings reveal that, when it comes to identifying pedestrians, autonomous vehicle software exhibits a troubling bias, favoring lighter-skinned individuals over their darker-skinned counterparts.
This revelation stems from extensive testing of eight AI-based pedestrian detection systems employed by self-driving car manufacturers.
The results expose a disconcerting 7.5% disparity in accuracy when detecting lighter-skinned versus darker-skinned pedestrians.
Even more concerning is the discovery that the software’s ability to detect individuals with darker skin tones is further compromised under challenging lighting conditions, such as those encountered on dimly lit roads.
Children are also at a higher risk
Jie M. Zhang, one of the researchers involved in the study, emphasized that this bias is not limited to one specific scenario or location. Incorrect detection rates for dark-skinned pedestrians were found to increase significantly from 7.14% during the daytime to a troubling 9.86% at night.
The study also found a marked gap between age groups: detection accuracy for adults was roughly 20% higher than for children. Gender, by contrast, showed only a 1.1% difference in detection accuracy.
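The gaps reported above are differences in miss rates between demographic groups. As a rough illustration of how such a disparity can be quantified, here is a minimal sketch; the helper functions and all counts are invented for this example and are not from the study (only the 7.14% and 9.86% figures echo the article's reported rates).

```python
def miss_rate(missed: int, total: int) -> float:
    """Fraction of pedestrians the detector failed to identify."""
    return missed / total

def disparity(group_a: tuple[int, int], group_b: tuple[int, int]) -> float:
    """Gap in miss rate between two groups, each given as (missed, total)."""
    return miss_rate(*group_a) - miss_rate(*group_b)

# Invented counts chosen so the rates match the article's reported figures:
darker_day   = (714, 10_000)   # 7.14% of darker-skinned pedestrians missed by day
darker_night = (986, 10_000)   # 9.86% missed at night
lighter_day  = (500, 10_000)   # hypothetical comparison group

gap = disparity(darker_day, lighter_day)
print(f"daytime miss-rate gap: {gap * 100:.2f} percentage points")
```

A fairness audit of a detector would compute such gaps across skin tone, age, and gender, as well as conditions like lighting, rather than reporting a single aggregate accuracy.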
In the era of AI, the concept of fairness holds paramount importance. Zhang stated, “Current provision for fairness in these systems is limited, which can have a major impact not only on future systems, but directly on pedestrian safety.”
She explained that an AI system is fair when it treats privileged and under-privileged groups alike, which is not currently the case for autonomous vehicles.
This bias, whether intentional or inadvertent, has long plagued AI systems, but the implications within the realm of self-driving cars are particularly grave.
Bias requires immediate action
The paper, titled “Dark-Skin Individuals Are at More Risk on the Street: Unmasking Fairness Issues of Autonomous Driving Systems,” published on the preprint server arXiv, not only exposes a critical problem but also calls for immediate action.
Zhang advocates for the establishment of guidelines and regulations to ensure that AI data is implemented without bias. She asserts that the collaboration of automotive manufacturers and government bodies is imperative to objectively measure the safety of these systems, especially concerning fairness.
Source: Cornell University
As AI becomes increasingly integrated into our daily lives, from the vehicles we ride in to our interactions with law enforcement, the issue of fairness assumes even greater significance.
In the face of these revelations, it is clear that addressing bias in autonomous vehicles is not merely a technological challenge but a matter of public safety that demands urgent attention.