What is Bias and how do we avoid it? Part 1


Bias, in simple data science terms, can be thought of as a tendency for a model to stray from the “ground truth”. That’s a broad definition; some biases are merely irritations, but others are potentially highly illegal.

All data scientists try to eliminate biases to make their models better, more accurate simulations of the real world.

Illegal biases arise when a model makes predictions that vary based on protected attributes, such as age or gender. For example, if you were to price car insurance differently for two people whose data differs only by gender, then you have a biased model. You might wonder why anyone would feed gender into their model in the first place (in the UK it is illegal to consider gender when pricing car insurance), but life is not always so simple!
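One rough way to catch this kind of bias is a counterfactual check: quote the same applicant twice, changing only the protected attribute, and compare the results. The sketch below uses a deliberately biased, made-up pricing function purely so the check has something to fire on; none of the feature names come from a real system.

```python
# Minimal sketch of a counterfactual bias check.
# `toy_price_quote` is a deliberately biased, hypothetical pricing
# function used only to demonstrate the check; the feature names
# are illustrative, not from any real pricing model.

def toy_price_quote(applicant):
    base = 500 + 2 * applicant["age"]
    # Deliberate bias, purely so the check below has something to catch.
    return base * (1.1 if applicant["gender"] == "male" else 1.0)

def counterfactual_gender_check(price_quote, applicant):
    """Quote the same applicant twice, varying only gender."""
    as_female = dict(applicant, gender="female")
    as_male = dict(applicant, gender="male")
    return price_quote(as_female), price_quote(as_male)

applicant = {"age": 30, "postcode": "NG1", "car_value": 12000, "gender": "female"}
quote_f, quote_m = counterfactual_gender_check(toy_price_quote, applicant)

if quote_f != quote_m:
    print(f"Gender changes the quote ({quote_f:.2f} vs {quote_m:.2f}): biased model")
```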

Consider the case of a well-known laptop maker who decided to allow their laptops to be unlocked with facial recognition. At the time this was a radical, cool innovation. It worked by fixing key points on the person’s face (the corners of the eyes, the sides of the mouth, etc.) and using the distances between those points as a sort of mesh, like a fingerprint, unique to each person! Sounds great… except it transpired the AI couldn’t see the points on the face of a person of colour. The contrast it needed to locate them wasn’t sufficient, and so it didn’t recognise that there was a face in front of it at all.

The cause was that they hadn’t included people of colour in the training set, so the AI never learned to identify them and the engineers never saw it fail on their test set. You might say, “just include more diversity in the training set and we’re done!” And yes, that is the first step towards avoiding bias, but it’s not sufficient. I’ll explain why, but first you need to understand how an AI is trained.

Let me explain (I promise there won’t be any maths!)

So a model starts life in a completely random state. It attempts to make a prediction on one example from the training set and is almost certainly wrong. It repeats this for the whole training set, recording its errors and successes as it goes. At the end it makes a retrospective change to its own internal weights and decision-making systems (aside: these are often confusingly called “biases” too, but they have nothing to do with the bias we are talking about here). If that change makes it “better” on the next run, it keeps it. In the case of facial recognition, “better” would be defined by most people as “more faces recognised correctly”.
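To make that loop concrete, here is a minimal sketch in Python. The `predict` function, the keep-it-if-it’s-better weight update and the data format are all simplified stand-ins invented for illustration, not a real training algorithm.

```python
import random

# Toy sketch of the training loop described above: predict over the whole
# training set, score by overall accuracy, and keep a weight change only
# if that score improves.

def predict(weights, example):
    """Stand-in prediction: weighted sum thresholded to a 0/1 label."""
    score = sum(w * x for w, x in zip(weights, example["features"]))
    return 1 if score > 0 else 0

def overall_accuracy(weights, training_set):
    correct = sum(predict(weights, ex) == ex["label"] for ex in training_set)
    return correct / len(training_set)

def train(training_set, n_rounds=1000):
    weights = [random.uniform(-1, 1) for _ in range(2)]  # start completely random
    best = overall_accuracy(weights, training_set)
    for _ in range(n_rounds):
        # Propose a small random tweak to the weights...
        candidate = [w + random.gauss(0, 0.1) for w in weights]
        score = overall_accuracy(candidate, training_set)
        # ...and keep it only if "better" (here: overall accuracy) improves.
        if score > best:
            weights, best = candidate, score
    return weights, best

toy_data = [{"features": [1.0, 0.2], "label": 1},
            {"features": [-0.5, 0.1], "label": 0}]
weights, acc = train(toy_data)
```

Note that the only thing this loop ever optimises is the *overall* accuracy; nothing in it cares how any individual subgroup fares. That detail matters in a moment.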

With that (abridged) explanation out of the way, how can a properly weighted training set lead to a biased AI? Well, suppose the training set is 80% Caucasian, 10% Black, 5% Asian and 5% Other. Let’s say the AI can identify 73% of all the faces it’s presented with in the training set. And suppose that, by attempting to include Asian faces, the AI gets inherently worse at recognising Caucasian faces. An AI which stopped trying to identify Asian faces *at all* would get a boost when identifying Caucasian ones. Since Caucasian faces make up 80% of the training set, it is quite possible that this would make the AI “better” *overall* at recognising faces compared with a more balanced approach.

Sorry, I lied about there being no maths. #NotSorry
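Here are some concrete (and entirely made-up) numbers to show how that can happen: the per-group recognition rates below are invented to mirror the 80/10/5/5 split above, not measured from any real system.

```python
# Illustrative numbers only: per-group recognition rates for two
# hypothetical models, weighted by the 80/10/5/5 training-set split.

group_share = {"Caucasian": 0.80, "Black": 0.10, "Asian": 0.05, "Other": 0.05}

# Model A tries to recognise everyone.
model_a = {"Caucasian": 0.75, "Black": 0.70, "Asian": 0.60, "Other": 0.60}
# Model B has given up on Asian faces entirely, gaining a little elsewhere.
model_b = {"Caucasian": 0.82, "Black": 0.70, "Asian": 0.00, "Other": 0.60}

def overall(rates):
    return sum(group_share[g] * rates[g] for g in group_share)

print(f"Model A overall: {overall(model_a):.1%}")  # 73.0%
print(f"Model B overall: {overall(model_b):.1%}")  # 75.6% - "better" overall, useless for one group
```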

In this way, despite having a balanced training set, defining “better” as an overall improvement in facial recognition can still leave subgroups being biased against. To counter this effect we might *need* to inform the AI of the race of the face, not to bias it, but specifically to CHECK for bias and to avoid it!
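In practice that check can be as simple as breaking accuracy down by the protected attribute and flagging any group that falls well below the overall figure. Here is a minimal sketch, assuming each example carries a prediction, a true label and a group label (the field names and tolerance are my own invention for illustration).

```python
from collections import defaultdict

# Minimal sketch of a per-group bias check: break accuracy down by a
# protected attribute and flag groups that fall well below the overall rate.

def accuracy_by_group(examples, tolerance=0.05):
    """Each example: {"prediction": ..., "label": ..., "group": ...}."""
    totals, correct = defaultdict(int), defaultdict(int)
    for ex in examples:
        totals[ex["group"]] += 1
        correct[ex["group"]] += ex["prediction"] == ex["label"]

    overall = sum(correct.values()) / sum(totals.values())
    per_group = {g: correct[g] / totals[g] for g in totals}
    flagged = [g for g, acc in per_group.items() if acc < overall - tolerance]
    return overall, per_group, flagged

examples = [
    {"prediction": 1, "label": 1, "group": "A"},
    {"prediction": 1, "label": 1, "group": "A"},
    {"prediction": 0, "label": 1, "group": "B"},
    {"prediction": 1, "label": 1, "group": "B"},
]
overall, per_group, flagged = accuracy_by_group(examples)
print(overall, per_group, flagged)  # group "B" gets flagged here
```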