AI Facial Recognition: Peacekeeper Or Racist Big Brother?
The dangers of AI-enabled policing.
The BBC recently covered the story of a woman called Sara (not her real name), who felt the consequences of AI going wrong first-hand. You see, she went into a store to buy a bar of chocolate, and in her own words, “Within less than a minute, I’m approached by a store worker who comes up to me and says, ‘You’re a thief, you need to leave the store’.” Sara had never stolen anything in her life, but the store’s AI facial recognition system, Facewatch, had misidentified her as a previously convicted shoplifter. Despite her pleas, her bag was searched and she was promptly escorted out of the store. Understandably traumatised by the experience, Sara told the BBC, “I was just crying and crying the entire journey home… I thought, ‘Oh, will my life be the same? I’m going to be looked at as a shoplifter when I’ve never stolen’.” Sara’s case is far from unique, and the problems with AI facial recognition go well beyond simple misidentification. So, let’s dive down this rabbit hole.
These AI facial recognition systems are widely used worldwide, and for good reason: they appear to work miracles. Take the Brazilian soccer stadium that sold its tickets through facial recognition. Customers would buy their tickets online and scan their faces, effectively turning their faces into their tickets. By cross-referencing these scans and purchase information with police and public records, authorities at the stadium were able to apprehend 28 criminals, identify 253 missing people, and even turn away 42 people who were in violation of court orders at the gates on match day.
But it goes wrong more than authorities let on.
For example, at the 2017 Champions League final in Cardiff, UK police trialled a similar system (one not linked to ticket sales). Of the roughly 170,000 attendees, it flagged 2,470 people as potential criminals, and 92% of those matches, more than 2,000 people, were false positives. In 2019, police in London trialled a similar system on the open streets and found that it misidentified members of the public as criminals 96% of the time! Private companies have had equally horrific results: one UK shopping centre found its system returned a 100% false positive rate, and a 14-year-old black schoolboy was fingerprinted after being misidentified.
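To see why numbers like these keep cropping up, it helps to do some quick maths. When almost everyone being scanned is innocent, even a fairly accurate matcher will produce mostly false alarms. Here’s a rough back-of-the-envelope sketch; the false-match rate, watchlist size and hit rate below are assumptions for illustration, not figures from any real deployment:

```python
# Back-of-the-envelope: why scanning a huge crowd yields mostly false alarms.
# Every number here is an illustrative assumption, not a real deployment figure.

attendees = 170_000          # crowd size, roughly that of the Cardiff final
false_match_rate = 0.001     # assume the system wrongly flags 0.1% of innocent faces
watchlist_present = 20       # assume only 20 genuinely wanted people attend
hit_rate = 0.90              # assume the system catches 90% of those 20

false_alarms = (attendees - watchlist_present) * false_match_rate
true_alerts = watchlist_present * hit_rate
total_alerts = false_alarms + true_alerts

print(f"False alarms: {false_alarms:.0f}")                                   # ~170
print(f"True alerts:  {true_alerts:.0f}")                                    # 18
print(f"Share of alerts that are wrong: {false_alarms / total_alerts:.0%}")  # ~90%
```

In this toy scenario, roughly nine out of ten alerts point at an innocent person, simply because almost nobody in the crowd is actually on the watchlist. And that is before any question of bias even enters the picture.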
This brings me to a worrying reality: these systems have a racial bias, even if it is not an intentional one. They are more than likely skewed towards recognising features familiar to their programmers, who are statistically likely to be white men. What’s more, if they are trained on data in which ethnic minorities are flagged as potential criminals at a higher rate than Caucasians, they will reproduce that pattern, reflecting and enforcing any systemic racism in a country’s politics and justice system. On top of this, these systems struggle to pick out facial features on darker skin tones, because camera hardware and default exposure settings have historically been calibrated for lighter skin. So even if you correct for the programmer and dataset biases, these systems are still more likely to misidentify people of colour than Caucasians.
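If you want to check whether a particular system really does this, the standard approach is to measure its false-match rate separately for each demographic group and compare, which is broadly how NIST evaluates these algorithms. Below is a toy sketch of that kind of audit; the records and group labels are invented purely to show the calculation:

```python
# Toy audit: does the matcher's false-match rate differ between groups?
# The records below are fabricated; a real audit would use a large labelled test set.

from collections import defaultdict

# Each record: (demographic_group, was_flagged_as_a_match, is_actually_on_the_watchlist)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

false_matches = defaultdict(int)
innocent_scans = defaultdict(int)

for group, flagged, on_watchlist in records:
    if not on_watchlist:             # only innocent people can be falsely matched
        innocent_scans[group] += 1
        if flagged:
            false_matches[group] += 1

for group in sorted(innocent_scans):
    rate = false_matches[group] / innocent_scans[group]
    print(f"{group}: false-match rate = {rate:.0%}")
```

If the two rates differ materially, the system is misidentifying one group more often than the other, which is exactly the kind of disparity described above.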
So it’s no surprise that, in recent years, automated AI facial recognition systems have led to the wrongful arrests of at least seven black people in the US. A 2023 Scientific American article likewise reported that these systems struggle to tell black people apart, leading law enforcement to disproportionately target black people, worsening racial inequities in policing and producing multiple wrongful arrests of black men.
So, why are authorities adopting such a deeply flawed technology? And why did the Brazilian stadium’s system work so well?
Well, the stadium’s system wasn’t fully automated. Alongside the face scans, the authorities had each customer’s banking details, name and possibly address, which helped them weed out false positives. They also had time to comb through the matches between the customer and criminal databases, with a human worker scrutinising the AI’s results to confirm each one. The facial recognition system only made the job of finding these criminals easier for the authorities; it didn’t do the job for them. In comparison, the systems used at other football events, in stores and by police in public have no database of names, bank accounts and so on to check the AI’s results against. Police officers and store workers are therefore acting on bad information, possibly compounded by their own mistaken biases about what a criminal looks and acts like.
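Put differently, the AI’s match only ever opened a case for a human to verify against other records; it never triggered an accusation on its own. A minimal sketch of that kind of triage, with hypothetical names, scores and threshold, might look like this:

```python
# A minimal sketch of "augment, don't automate": a face match never triggers an
# accusation by itself, it only creates a case for a human reviewer, ideally with
# corroborating records attached. All names, scores and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Match:
    face_similarity: float   # similarity score from the face-recognition model
    ticket_name: str         # name on the ticket purchase
    record_name: str         # name on the police or public record

REVIEW_THRESHOLD = 0.85      # assumed cut-off below which a match is not worth reviewing

def triage(match: Match) -> str:
    """Decide what happens to an AI match: drop it, or queue it for a human."""
    if match.face_similarity < REVIEW_THRESHOLD:
        return "discard"
    # Corroborating evidence strengthens the case, but the final call is always
    # made by a person, never by the similarity score alone.
    if match.ticket_name.strip().lower() == match.record_name.strip().lower():
        return "human review (corroborated by purchase records)"
    return "human review (face match only)"

print(triage(Match(0.91, "Ana Souza", "Ana Souza")))   # human review (corroborated)
print(triage(Match(0.91, "Ana Souza", "Maria Lima")))  # human review (face match only)
print(triage(Match(0.60, "Ana Souza", "Ana Souza")))   # discard
```

The design point is that the model’s output is treated as a lead to be checked, never as a verdict.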
The stadium is an excellent example of what the Harvard Business Review has been saying for years: AI should be used to augment tasks, not to replace tasks or workers. It is too prone to reflecting and exaggerating our biases, and too unreliable, to fully automate anything remotely important. Sadly, we are using AI facial recognition to do a task that didn’t previously exist, and we are ill-equipped to moderate it. As the Brazilian stadium shows, it doesn’t have to be like this; with the right checks and balances, this technology can be used to fantastic effect. There is nothing inherently immoral, unjust or racist about the technology itself. But the way we are currently using it is immoral, unjust and racist.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and follow me on BlueSky or X and help get the word out by hitting the share button below.
Sources: BBC, The Independent, ASIS, Capital B News, The Guardian, Scientific American