The era of ChatGPT is upon us. It is the end times… Well, many would have you believe so. Let’s talk about what’s reality and what’s hype.

GPT-4, the latest model behind ChatGPT, and other such models (including Bard, LLaMA, etc.) are a type of AI called Large Language Models (LLMs). They are so fluent with language that they can appear to be hiding some kind of intelligence. So much so that one of Google’s own engineers was fired after claiming the company’s chatbot was sentient.

The reality is more nuanced and less dramatic. Technically speaking, ChatGPT and other LLMs are not doing much more than what your phone does when it predicts the next word in the sentence you are typing and helpfully suggests it. They are just doing it on a far bigger scale, with a lot more computing power and training data behind them. The model looks at the context of what you asked it to produce and starts to write a response, predicting each word as it goes and choosing the most likely next word according to what it learned from its training data. Unlike your phone, it can take in the context of the whole conversation, paying attention to prior phrases, sentences, and topics.
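To make the “predict the next word” idea concrete, here is a minimal sketch using the small, open-source GPT-2 model via Hugging Face’s transformers library. To be clear, this is not ChatGPT’s own code; the model choice and the simple “always pick the most likely word” rule are my own simplifications for illustration. It just shows the basic predict, append, repeat loop in miniature.

```python
# Toy illustration of next-word (next-token) prediction with GPT-2.
# Assumes the `transformers` and `torch` packages are installed; this is a
# sketch of the idea, not how ChatGPT itself is built or served.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The insurance claim was delayed because"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate a continuation one token at a time, always taking the single most
# likely next token (greedy decoding). Real chatbots sample with more nuance,
# but the core loop is the same: predict, append, repeat.
for _ in range(15):
    with torch.no_grad():
        logits = model(input_ids).logits      # a score for every word in the vocabulary
    next_token = logits[0, -1].argmax()       # pick the most likely next token
    input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Running this prints the prompt plus a plausible continuation, one most-likely word at a time. That is essentially all the “writing” an LLM ever does, just at vastly greater scale and sophistication.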

That’s why a very large training set is required for these systems to work. ChatGPT’s training data runs to hundreds of billions of words of text, if not more.

Scale can be transformative. Jet planes do the same job as walking. Facebook does nothing the postal service couldn’t do – just a billion times faster. Scale matters – scale can make a technology revolutionary. Large Language Models are just that: large. Very large. From that scale has emerged an AI that is very impressive in its ability to hold conversations.

But that’s not what impresses experts in the field; we know it was trained on conversations, and it performs well at that task. What has experts excited is that, in scaling up the model, it has acquired abilities no one would have predicted: the ability to do complex math, for example, or to write computer programs. These are “emergent phenomena”, i.e. things we didn’t train it to do but it learned anyway. The obvious concern is that we didn’t ask it to learn these things, and it makes people somewhat uneasy that it has. Emergent phenomena are nothing new – we’ve known for a long time that models can do this – but with a model the size of ChatGPT, and remembering what scale can do, it should give us a moment’s pause before training models this big.
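As an illustration of how you might probe for these emergent abilities yourself, here is a hedged sketch that asks a ChatGPT-class model to do arithmetic and to write code, using the OpenAI Python client. The model name and the client interface are assumptions on my part and change between releases; treat this as a sketch, not a reference.

```python
# Illustrative only: probe a ChatGPT-class model for abilities it was never
# explicitly trained on, such as arithmetic and programming. Assumes the
# pre-1.0 `openai` Python package and a valid API key; model names and the
# client interface are assumptions and change over time.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompts = [
    "What is 347 * 29? Show your working.",
    "Write a Python function that returns the nth Fibonacci number.",
]

for prompt in prompts:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```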

But ChatGPT has its limitations too. We’ve seen it grow from a toy in GPT-2 to an industry disrupter in the ChatGPT variant of the model. The scale of training material has grown hugely across those iterations, as I mentioned above, but that well has now run dry. OpenAI themselves say that the huge advances from simply adding more training data are at an end, and that future improvements will focus on things like accuracy and ethics. Given that the AI is only moderately good at writing and answering questions right now, this doesn’t give me any fear that we are about to see a worldwide revolution in the industry beyond areas like writing blogs and support chats. This isn’t “the singularity” about to happen. (Generative AI models like Stable Diffusion and Midjourney, which produce high-quality images, are definite game changers, but they are outside the scope of this article.)

For the insurance industry, ChatGPT is transformative in the area of customer support calls, both pre- and post-bind. It’s not going to underwrite any policies or set prices for you (if you want that, we have AI in production which CAN do that for you. Contact us for more information). If you are an insurer that feels unable to scale up because you are limited by support personnel, or that wants to do more with the personnel you already have, then ChatGPT could be right for you. There it is again, that word… Scale.