The art of the possible…

As automation ramps up and aggregators grow, consumers are looking for quick insurance covers. Automated risk selection is required, but traditional computer programming is not up to the task.

For years the insurance industry has tried to describe what a good risk looks like. We make our best attempt at it and accept that it’s a hard thing to explain conclusively to a computer. We can recognise bad risks when we see them, but concretely describing them is hard. This is why underwriters and actuaries exist and still review policies by hand.

At Machine Learning Programs (MLP), part of the Open GI Group, we work with clients to capture data, identify predictive behaviours, generate powerful insights and turn that complex data into a valuable business asset. Machines can learn to distinguish good risks from bad; in insurance, that translates into better loss ratios and sharper pricing.

Machine learning represents a major turning point in computing and is changing and shaping how the insurance industry is developing. In the last few years there’s been an acceleration in its adoption, and we are seeing brokers who have embraced AI and machine learning translate that into a competitive edge.

The potential for machine learning and AI in the sector is extensive, promising new levels of insight and convenience. However, it is fair to say that it remains relatively untapped by the majority of brokers.

The problem is that not everyone is experienced in computers, AI, maths or data science, so at first it can seem daunting and complex. Let me explain why machine learning matters and how it can help you.

So, what exactly is machine learning?

The easiest way to explain it is to tell you how it differs from programming used today.

In every computer language there are simple instructions like: “If A is true then do B, otherwise do C”. The programmer gives the computer a set of instructions and it starts at the top and works down, doing what it is told. It never varies, it never makes a judgement. It just does what it is programmed to do.
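That "if A then B, otherwise C" style can be sketched in a few lines of Python. The rule, the threshold and the field names below are all invented for illustration; the point is that the program does exactly what it is told and nothing more.

```python
# A hard-coded underwriting rule: "If A is true then do B, otherwise do C".
# The age/claims thresholds here are made up purely for illustration.

def quote_decision(age: int, prior_claims: int) -> str:
    """Follows its fixed instructions top to bottom; it never exercises judgement."""
    if age >= 25 and prior_claims == 0:
        return "accept"
    else:
        return "refer"

print(quote_decision(30, 0))  # accept
print(quote_decision(22, 0))  # refer -- one rule missed, no subtlety applied
```

However sensible the rule looks, every case that falls outside it is handled identically, which is exactly the brittleness described above.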

Have you ever seen a video of a robot following instructions yet oblivious to the carnage and chaos it’s creating around it? That gives you some idea of the brittleness of standard programming techniques. These techniques work well when you are absolutely certain what you want the machine to do; but not when you want it to use some subtlety or judgement.

So how is machine learning different?

Imagine a friend calls and says “I’m looking at a picture of a dog, or maybe it’s a cat. I can’t decide. Can you help me tell which?”. You might say “Does it have pointy ears? Or whiskers? Then it’s a cat, otherwise it’s a dog”. But plenty of dogs have pointy ears and whiskers too.

It turns out it’s very hard to explain to someone how to decide. But why is such a simple task hard to describe to someone else?

It comes down to a combination of many small cues, no one of which is definitive on its own. Taken together, though, 99.9% of the time you can tell cats from dogs just by looking. It’s subtle, possibly indescribable, but you just know.

A computer is in the same position as your friend looking at the picture. If you can’t explain how to tell them apart, you can’t write traditional code for the task either.

How does machine learning achieve this?

Firstly, I need a pile of pictures of cats and dogs. Then I correctly label them “Cat” and “Dog”, and I feed them to the machine one at a time. The machine extracts many features and characteristics. It puts all this information, picture by picture, through a huge matrix of weights and, because those weights start out random, at first it gets things horribly wrong. It has never seen a cat or a dog and has no idea what they are, so it is essentially guessing.

But it has the answers you provided! It can tell whether each decision was right or wrong. By pure chance it will get about 50% right and about 50% wrong. When it gets an answer right, the program goes back and strengthens whatever path led to that answer, giving that method greater weight in future. Conversely, when it guesses incorrectly, it reduces the weight and importance of that approach.

The computer program is, in effect, re-writing itself. It is no longer constrained to do only what the programmer told it to do. With every iteration, every success and failure, it adjusts itself to be a little bit better than before. After many iterations of this self-adjustment, the computer can tell cats from dogs. It has “learned” to do so reliably!

Through trial and error (and an exceptionally large amount of data) it picks up on the tiny details that we humans recognise instinctively: things we cannot even describe ourselves.
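The loop described above can be sketched in miniature. This is a toy perceptron-style learner, not any production algorithm: the two “features” (imagine ear pointiness and whisker length, scaled 0 to 1) and all the data are invented, and the “huge matrix of weights” is shrunk to just two numbers.

```python
import random

random.seed(0)

# Labelled examples: 1 = "cat", 0 = "dog". Invented for illustration --
# cats score high on both made-up features, dogs score low.
examples = [([random.uniform(0.6, 1.0), random.uniform(0.6, 1.0)], 1)
            for _ in range(50)]
examples += [([random.uniform(0.0, 0.4), random.uniform(0.0, 0.4)], 0)
             for _ in range(50)]

# The weights start out random, so early guesses are essentially chance.
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0

def guess(features):
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score > 0 else 0

def accuracy():
    return sum(guess(f) == label for f, label in examples) / len(examples)

print(f"before training: {accuracy():.0%}")

# The feedback step: a wrong guess shifts the weights towards the right
# answer; a right guess (error = 0) leaves them alone.
for _ in range(50):
    for features, label in examples:
        error = label - guess(features)   # 0 if right, +1 or -1 if wrong
        for i, f in enumerate(features):
            weights[i] += 0.1 * error * f
        bias += 0.1 * error

print(f"after training: {accuracy():.0%}")
```

After the loop runs, the adjusted weights separate the two labels almost perfectly, even though nobody ever wrote a rule saying what a cat is.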

So how does this apply to insurance?

Using the same analogy, imagine cats are insurance claimants and dogs are excellent drivers. Or cats are fraudsters and dogs are honest people. Feeding a large amount of data, together with the answers, to a machine learning algorithm allows it to pick up on the subtle, possibly indescribable, details and trends that distinguish one from the other. It turns out computers are rather good at this too; it is what my company does for a living!
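The same toy loop works unchanged once the labels are relabelled for insurance. Everything here is fabricated for illustration: the two features (think normalised age and prior incidents) and the pattern connecting them to claims are invented, and a real model would use vastly more data and features.

```python
import random

random.seed(1)

# Invented records: 1 = "went on to claim", 0 = "claim-free".
def make_record(label):
    if label:  # hypothetical pattern: claimants younger, more incidents
        return [random.uniform(0.0, 0.4), random.uniform(0.6, 1.0)], 1
    return [random.uniform(0.6, 1.0), random.uniform(0.0, 0.4)], 0

records = [make_record(1) for _ in range(50)] + \
          [make_record(0) for _ in range(50)]

w, b = [0.0, 0.0], 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Same perceptron-style feedback loop as the cat/dog sketch.
for _ in range(50):
    for x, label in records:
        error = label - predict(x)
        w = [wi + 0.1 * error * xi for wi, xi in zip(w, x)]
        b += 0.1 * error

hits = sum(predict(x) == label for x, label in records)
print(f"identified {hits} of {len(records)} policies correctly")
```

Nothing about the loop is insurance-specific; only the labels and features change, which is why the same machinery can be pointed at claims, fraud or pricing.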

There is no doubt that when insurance is complemented by machine learning, powerful insights can be generated from the data produced, reducing loss ratios and improving pricing decisions along the way.