Facial 'recognition', accuracy and bad behaviour

For me, probably the most haunting scene in Coded Bias was the detention on the street of a 14-year-old boy.

The misuse of state power is shocking when it is applied to a child. One cannot watch that clip without thinking that those officers are just bad human beings. The only justification for treating a child like that would be if they had witnessed something which made them think the 'suspect' was a risk to other people. Which clearly wasn't the case with that young lad.

Unfortunately, a lot of the campaign focus against police use of facial recognition emphasises inaccuracy and false positives. That is not the problem here. Even if the system the police were using were 99.999% accurate, it would still have a false positive rate of 1 in 100,000, and in widespread use that would mean thousands of people falsely detained.
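To make the base-rate arithmetic concrete, here is a minimal sketch. The scan count is a purely hypothetical figure for a widespread, long-running deployment, not data about any real system:

```python
# Illustrative base-rate arithmetic, not data about any real deployment.
# Assumed numbers: a hypothetical 99.999% accurate system and a
# hypothetical total of faces scanned across a city-wide rollout.

false_positive_rate = 1 - 0.99999        # 1 in 100,000
faces_scanned = 500_000_000              # hypothetical: widespread, long-running use

expected_false_matches = faces_scanned * false_positive_rate
print(f"Expected false matches: {expected_false_matches:,.0f}")
# -> Expected false matches: 5,000

# Even a system far more accurate than anything currently deployed
# still flags thousands of innocent people once the scale is large enough.
```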

What is wrong with what happened in the clip is entirely down to how the police reacted - in every case. As soon as they were notified of a 'match' they swooped as if on a criminal caught in the act. Why so aggressive? Because of what they believed was happening. Those officers on the ground were presumably briefed for the operation by being told that a 'highly accurate' (and remember, to those not good at thinking about false positives, 90% sounds really accurate) 'AI system' was 'identifying' people who the police were looking for. If that is what you believe, then when you are told of a match, you enter into the mindset apparently required to detain wanted criminals. (Though seeing that the 'match' was a 14-year-old school pupil should have caused a rethink.)

Imagine if they were instead briefed: "Many of the people this system 'identifies' will in fact be entirely innocent members of the public who are alarmed and distressed by being approached by the police." Then a stop might have started with a conversation like this:

"Excuse me, you know we are testing a new system in this area and it has indicated I should talk to you. Would you mind providing me with your name and address?"

But that clearly did not happen because what has gone wrong here starts with a misuse of language, which in turn leads to unjustified beliefs.

A so-called 'facial recognition system' does not recognise faces.

What it does is statistical pattern matching between mathematical objects (erroneously called 'faceprints' to make them sound physical, like fingerprints): one set created from photographs the police have somehow collected of people they want to speak to, the other created from the images captured by the 'facial recognition camera'. That is absolutely nothing like what human beings do when they recognise faces. Faces are living, changing, context-sensitive biological items, and recognising them takes sensitivity to all that. If someone is familiar, we may be able to 'recognise them in a photograph', but we know that is epistemically different, as is shown by our reaction when we get it wrong.

Because this 'facial recognition' system is quite good at predicting - yes predicting - that the people who were the causal objects of, and depicted by, the two photographs used to create the mathematical objects are one and the same person, it becomes a reasonably good indicator of identity. That is all.
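For readers who want to see the shape of that statistical comparison, here is a minimal sketch. The vectors, threshold and function are hypothetical illustrations under my own assumptions, not the system any police force actually uses:

```python
import numpy as np

# Illustrative sketch of the kind of comparison a 'facial recognition'
# system performs: reduce each photograph to a vector of numbers, then
# measure how similar the two vectors are. All names and values here
# are hypothetical.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two 'faceprint' vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 'faceprints': one derived from a watchlist photograph,
# one from a frame captured by the street camera.
watchlist_vector = np.random.default_rng(0).normal(size=128)
camera_vector = watchlist_vector + np.random.default_rng(1).normal(scale=0.3, size=128)

MATCH_THRESHOLD = 0.8  # hypothetical tuning choice, trading missed matches against false alarms

score = cosine_similarity(watchlist_vector, camera_vector)
print(f"similarity = {score:.2f}, 'match' = {score > MATCH_THRESHOLD}")
# The output is a statistical prediction that two photographs depict the
# same person -- nothing like a human act of recognising a face.
```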

Further thought: 'recognising a face' as a human cognitive achievement has moral significance, insofar as it is a significant and often decisive factor in the moral justifications of our actions. A very simple way to see this is to note the power that misrecognition (by one person of another) has in excusing what would otherwise be unacceptable behaviour.

For this reason I was really pleased to read that the use of Live Facial Recognition in Frasers Group shops is moderated by human 'super-recognisers' (also used at borders; they are specialists in recognising people from photographic depictions, not in face recognition generally) and no action is taken unless they confirm the match. That is much better practice because it recognises that the 'AI' is only an assistive tool - as is the super-recogniser - in the human practice of face recognition. I hope the police learn from it. It is a first step towards using the technology without becoming a bad person.

Of course, this still leaves the issue of who gets onto the watchlists and why, especially when they are being used by businesses on private property - we all remember the lawyers banned from Madison Square Garden for working for the wrong company - but that could also be made transparent. In fact, there is no reason the watchlist for Frasers Group couldn't be published on their website with a right of challenge. That would be a simple regulatory requirement which would make abuses of the technology much harder.

