So what’s all the fuss lately about having achieved sentient AI? Could that be true, given that the latest *extremely* large language models have tens and even hundreds of billions of parameters, numbers rivalling, or even exceeding, the human brain’s *mere* 86 billion neurons?
Well, it turns out that is not exactly the case. What is most likely being confused with sentience is the ability of large language models to imitate natural language remarkably well. That is their whole purpose, after all: modelling language. Their sheer size is what enables them to learn even the most nuanced details of the immense amounts of data they are trained on, and that can fool even a senior Google engineer.
How do these large language models work?
LLMs are shown vast amounts of natural language text, from which they learn various linguistic patterns. In the case of the “sentient” AI, the LLM was so good that it could recognise the context of the input prompt, find the most appropriate linguistic pattern within its knowledge base, and respond quite convincingly.
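The core idea of learning linguistic patterns from text can be illustrated with a drastically simplified sketch: a toy bigram model that predicts the next word from word pairs it has seen before. Real LLMs use neural networks with billions of parameters; the tiny corpus and function names here are invented for illustration only.

```python
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the next word from those counts. Real LLMs do something
# far more sophisticated, but the spirit is the same: model language
# by learning patterns from text.

corpus = (
    "the drug lowered my blood pressure . "
    "the drug worked really well . "
    "my blood pressure is lower now ."
).split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(predict_next("blood"))  # "pressure" always follows "blood" here
```

Scale that idea up from word pairs to long-range patterns across billions of documents, and the convincing behaviour described above becomes less mysterious.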
So if these large language models are not sentient, what else can we do with them?
We shouldn’t let our human tendency to anthropomorphise objects distract us from putting this amazing technology to good use. And what better use than health? Every day, hundreds of millions of patients talk about their experiences online. This information is so vast that it is hard to listen to it all and provide patients with the solutions they need. Sophisticated automated tools are able to cope with such volumes of data, and this is where LLMs come in.
LLMs’ exceptional ability to infer contextual information from natural language makes them invaluable for extracting patient information. Let’s play a game to see if you can challenge an LLM. Spot the patient in the following texts:
A: I have had great results with Aceon, highly suggested!
B: Rayquaza does not work for me at all, avoid!!
That’s a tough one, isn’t it? That’s because person A is talking about their blood pressure medication, while person B is talking about Pokémon! An LLM would have no difficulty distinguishing between the two, even if it did not know exactly what the two names referred to.
Recognising and classifying entities in text, disambiguating between words used in different contexts, and classifying text by author or sentiment are all examples of natural language processing tasks that depend primarily on context. LLMs have a massive advantage over any other current method at performing these tasks.
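To make the disambiguation idea concrete, here is a toy sketch (emphatically not an LLM) that guesses which domain a post belongs to by scoring the words around the unknown name. The cue-word lists are invented assumptions for illustration; an LLM learns such contextual associations implicitly from data rather than from hand-written lists.

```python
# Toy context-based disambiguation (not an actual LLM): decide whether
# an unknown name refers to a medication or a Pokémon by counting how
# many surrounding words match each domain's cue words. The cue lists
# below are illustrative assumptions, not derived from any real model.

CONTEXT_CUES = {
    "medication": {"dose", "prescribed", "pressure", "effects", "pharmacy", "doctor"},
    "pokemon": {"battle", "caught", "trainer", "evolve", "team", "raid"},
}

def guess_domain(text):
    """Return the domain whose cue words overlap most with the text."""
    words = set(text.lower().replace("!", "").replace(",", "").split())
    return max(CONTEXT_CUES, key=lambda domain: len(CONTEXT_CUES[domain] & words))

print(guess_domain("I caught it during a raid and added it to my battle team"))
# -> "pokemon"
print(guess_domain("My doctor prescribed a low dose and my blood pressure dropped"))
# -> "medication"
```

Where this toy approach falls apart on unseen vocabulary or subtle phrasing, an LLM keeps working, which is exactly the advantage described above.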
Scaling up LLMs has not yet shown diminishing returns. Larger language models are better and can do more things. They have become an indispensable tool in every computational linguist’s or data scientist’s arsenal. While sentient AI would satisfy all of our sci-fi dreams, for now these advanced tools should be prioritised for solving real-world problems.
Image from Shutterstock ID: 1936536418