In the ever-evolving landscape of artificial intelligence, there’s a term you’re likely to encounter more and more often: ‘explainability.’ If you haven’t crossed paths with it yet, don’t worry. In this article we’ll unpack what AI explainability actually means, and why it’s a subject deserving of your attention.

What is explainability?

Explainability is exactly what you think it is. Can you, the operator of the AI, explain why your AI just made the decision it did? Can you explain its reasoning?

You might be forgiven for thinking that it should be straightforward. After all, AI is built from computer code. Computers only do what we tell them to, right? Well, no… that’s where AI diverges from regular computer programming, and this is what creates the ‘explainability problem’. Traditional computer programs say things like “If X is greater than 5, do A; otherwise do B”. That’s easy to explain: if the computer did A, it’s because X was greater than 5.
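
To make the contrast concrete, here’s a toy sketch of that kind of hand-written rule in Python; the threshold of 5 comes straight from the example above.

```python
# A traditional, hand-written rule: the programmer fixes the threshold,
# so the reason for any decision can be read straight from the code.
def decide(x: float) -> str:
    if x > 5:
        return "A"
    return "B"

print(decide(7.2))  # "A" -- because 7.2 is greater than 5
```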

AI coding is more like this: “Work out for yourself what threshold of X should trigger A”. That’s a lot harder to explain in reverse. It’s still possible, but not straightforward. We can dig around and discover that during training the AI determined that 4.65 was the optimum break point for X: anything above that and it does A, otherwise it does B.
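
And here is a toy sketch of what “work out the threshold for yourself” means: a brute-force search over a handful of invented training examples. The data and the search are illustrative only; real models learn millions of such parameters at once.

```python
# Toy sketch: instead of hard-coding the threshold, search past examples for the
# value of X that best separates "should do A" from "should do B".
# The training data below is invented purely for illustration.
examples = [(2.5, "B"), (3.1, "B"), (4.0, "B"), (4.7, "A"), (5.2, "A"), (6.8, "A")]

def learn_threshold(data):
    best_t, best_correct = None, -1
    for t in sorted(x for x, _ in data):
        correct = sum((label == "A") == (x > t) for x, label in data)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

print(learn_threshold(examples))  # the learned break point -- the '4.65' of the example, in miniature
```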

Understanding Neural Networks and AI Decision-Making

This is where Deep Learning enters the conversation. Deep Learning typically refers to Neural Networks: huge matrices of numbers simulating the varying strengths of neurons in a biological brain. In the example above, X would be passed through that matrix, multiplied by all the numbers in it, and then summed up. If the answer is more than a threshold, we do A. If not, we do B. So far, the same as before. But how can you explain that? Just as no single neuron in your brain is responsible for any single decision, so it is with Neural Networks. They are too messy and chaotic to be explainable at that level.
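
Here is a minimal sketch of the matrix arithmetic being described, using NumPy with random stand-in weights; a real network has millions of learned weights across many layers, which is exactly why no single number “explains” the decision.

```python
import numpy as np

# One tiny "layer": the input is multiplied by a matrix of weights, summed up,
# and the result compared to a threshold. The weights here are random stand-ins.
rng = np.random.default_rng(0)
hidden_weights = rng.normal(size=(4, 8))   # 4 input features -> 8 hidden units
output_weights = rng.normal(size=8)

x = np.array([4.65, 1.0, 0.3, 2.2])            # some input features
hidden = np.maximum(hidden_weights.T @ x, 0)   # multiply by the matrix, sum, apply a non-linearity
score = output_weights @ hidden

print("A" if score > 0 else "B")   # the decision emerges from all the weights at once
```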

But even if something is inexplicable at the code level, we can still examine its responses and make reasoned evaluations about them. My company, Machine Learning Programs, has been doing just that. Take an AI that predicts propensity to claim: we can feed it a few dozen pieces of information about the driver (their age, their yearly mileage, etc.) and it will return a probability that they will claim in the next 12 months.
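
As a hedged illustration of what that kind of interface looks like (the function name, feature names and returned value are assumptions for the example, not MLP’s actual API):

```python
def predict_claim_probability(driver: dict) -> float:
    """Stand-in for a trained propensity-to-claim model; returns a fixed value for illustration."""
    return 0.07   # i.e. a 7% chance of a claim in the next 12 months

driver = {
    "age": 42,
    "annual_mileage": 9000,
    "vehicle_modified": False,
    "business_mileage": 1500,
    # ...a few dozen more fields in practice
}

print(predict_claim_probability(driver))
```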

Suppose we take exactly the same data but increase the age by a year. We get a very similar but slightly different result. Now suppose we keep the age the same but change the mileage: a different result again. Suppose we change both. Suppose we change each one a LOT.

Each time we change a variable, we get a result that differs from the original. We can graph how much the result changes against how much each variable (or combination of variables) changes, which tells us the relative importance of that variable to the result. Rinse and repeat for every piece of data we feed into the AI and we can begin to reason about why we get a particular result.
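
Here is a minimal sketch of that perturb-and-compare loop, using a made-up stand-in model; the method, not the numbers, is the point.

```python
# Vary one input at a time, re-score, and record how much the prediction moves.
def toy_model(features: dict) -> float:
    # Pretend "trained" model: older drivers and higher mileage push the score up.
    score = 0.02 + 0.001 * max(features["age"] - 60, 0) + 0.000004 * features["annual_mileage"]
    return min(score, 1.0)

base = {"age": 68, "annual_mileage": 15000}
base_score = toy_model(base)

sensitivity = {}
for name, delta in [("age", 1), ("annual_mileage", 1000)]:
    perturbed = dict(base, **{name: base[name] + delta})
    sensitivity[name] = toy_model(perturbed) - base_score

print(base_score)    # the original prediction
print(sensitivity)   # how much each variable shifts the prediction
```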

Moreover, we’ve developed this into an English-language response, so that the AI can itself explain what is affecting its decision.

For example, the model can tell you, in English, that the main reason policyholder A had an MLP score of X is that they were over 65, lived in an area with a higher traffic density than the majority of the population, drove a modified car and covered significant business mileage.
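
Here is one way such a sentence could be assembled from the sensitivity figures; the factor descriptions, contribution values and wording template are invented for illustration and are not the production system.

```python
# Turn the largest contributions into a plain-English explanation.
contributions = {
    "being aged over 65": 0.031,
    "above-average local traffic density": 0.024,
    "driving a modified car": 0.012,
    "significant business mileage": 0.009,
    "low annual mileage": -0.004,
}

top_factors = sorted(contributions, key=lambda k: abs(contributions[k]), reverse=True)[:4]
print("The main factors behind this score are: "
      + ", ".join(top_factors[:-1]) + " and " + top_factors[-1] + ".")
```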

That’s a powerful explanation to give you an understanding of why the AI is doing what it’s doing.

Would you get a better explanation if you asked a human underwriter?