Are LLMs Really Intelligent?

Mithün Paul, PhD
7 min read · May 18, 2024

--

This is the second part of a talk I gave in April 2024 at a conference on consciousness. Here is the first part, titled “Are Humans Really Intelligent?”

Let’s do some thought experiments. Imagine a little toddler growing up, and imagine they kept a daily journal from the day they were born. This is how it might look:

Day 1: “Phew, that was a rough landing. I was really getting cozy in there when someone pulled me out. What the…And the worst part, now I need to apparently breathe to keep myself awake... Alright gtg/ttyl… have to cry”.

Day 25: “Today I noticed this person called “Mom”. She is sooooo coool. She gives me yummy yummy milk food which helps me sleep and grow. Also whenever “Mom” comes next to me, and hugs me, I feel warm and happy and joyful. “Mom” today told me this feeling is called “Love” and she loves me a lot.”

Day 50: “I am slowly noticing this other person who is also around. His name is Dad. He doesn’t give me milk, but he also holds me in his arms a lot. With him too, especially when he hugs me, I get that warm feeling, that same love thing. In fact, today I heard Dad telling me he also loves me sooooooo much. I am really digging this love thing. All I want now is milk, and love. Rather, I now go in a cycle of: Milk, Play, Love, Sleep. Rinse, repeat.”

Day 100: “Today I met Uncle, Mom’s brother. He is cool too. He told me so many things about this place called World where he lives. I can’t wait to see it all. He has promised he will guide me and slowly take me to meet this World place. Oh, by the way, he told me today that not only do Mom and Dad love me, but Dad and Mom love each other too… Wow… isn’t that cool? Looks like love is everywhere in this family.”

So, slowly, as the child grows up, they form mental models of Mom, Dad, and love. And finally, when their uncle tells them that “Dad loves Mom”, the child is able to understand what it means. How? Think about it: given only the meanings of the words “Mom”, “Dad”, and “Love”, how was the child able to automatically acquire, understand, and assimilate the meaning of the new sentence “Dad loves Mom”?

This ability is what greats like Noam Chomsky call the innateness of language acquisition in humans. Humans are born with the ability to acquire meanings, to create a mental model for each of these meanings, and to recursively combine them into bigger mental models on their own when they hear permutations and combinations of the initial meanings. In other words, humans are born with an innate capacity for meaning and grammar (the technical terms being semantics and syntax, respectively), and for recursively building new mental models out of them.

Now let’s compare the above thought experiment with another, completely orthogonal one: a patient waking up from anesthesia after a major surgery.

If you have been through this yourself, or if you have seen someone coming out of anesthesia, you know that the first few moments of “consciousness” are completely blurry and vague. You have no idea where you are or who you are, let alone the ability to understand words or meanings. Slowly, slowly, you get a sense of the world around you; you are able to hear the nurses calling your name, and as time passes you are not only able to understand what they are saying, but also to reply to them in coherent natural language. In other words, you did not switch from no consciousness to full consciousness immediately. You went through a hierarchical process of waking up into consciousness, one piece of evidence for which is your gradually returning ability to understand and speak natural language.

Now think about this for a second: isn’t this exactly like what a child goes through from birth, maybe at a smaller scale? Both the newborn child and the patient waking up from anesthesia go through a hierarchical process of language acquisition. Is this an eerie coincidence, or is it just the tip of an iceberg?

Now, if you go deeper into the details of the Orch OR theory of consciousness we mentioned earlier, you can see that most of Stuart Hameroff’s experiments were done on patients waking up from anesthesia (he is an anesthesiologist himself). In fact, if you want a quick summary of that theory, it is this: consciousness happens at a level deeper inside the neural network, in structures called microtubules; consciousness is a quantum process; and the process of consciousness is hierarchical, i.e., a patient doesn’t wake up into consciousness immediately, but through a hierarchical process. In other words, the wave of consciousness doesn’t collapse into the person’s body all at once; it happens slowly and hierarchically.

So the questions I want to ask here are: in the process of waking up (in a toddler, and in a patient coming out of anesthesia), when does the meaning-acquisition (and in turn language-acquisition) phase start? Is it after the complete collapse of consciousness, or during it? Is acquiring language proof of consciousness? And most importantly, if a machine says it can understand human natural language, does that mean it is conscious?

Just to satiate my curiosity, I decided to ask these questions to the mother of all artificial intelligence (as of April 2024, at least): the Large Language Model we now call ChatGPT. Here are its answers:

We have already segued into our main topic: Large Language Models and intelligence. As of this writing, Large Language Models have been exhibiting what are called emergent abilities: signs of intelligence and, in turn, of language acquisition. LLMs are able to understand several complex tasks and questions a human asks, and to reply correctly. In the picture you can see that as the size of the model increases, more and more of these properties keep emerging.

A trend you can notice here is that the “things language models can do” grow in direct proportion to the size of the neural network; i.e., the bigger the transformer-based model (in terms of the number of parameters and the amount of data it needs to see, not to mention the time, money, and energy required to train it), the more of these “intelligence” and “language acquisition” properties keep emerging.

However, the question I want to ask is: is meaning and language understanding meant to be just an EMERGENT PROPERTY that appears after the machine sees, say, 3 trillion+ sentences, or is it a skill that should be fundamental to an AI system?

Most importantly, what is the current definition of AI: is it meant to be a copy of human intelligence (a.k.a. Artificial Intelligence), or an augmentation of human intelligence? If it is meant to be an augmentation of human intelligence, it is definitely doing a great job. But is it truly a copy of human intelligence? Or rather: does a human child need to see 3 trillion examples before understanding “Mom loves Dad”?

I will leave you with that thought.

Here is the third part of this talk, titled “Are We at the Brink of Another AI Winter?”


Mithün Paul, PhD

Research Scientist, Artificial Intelligence, NLP, Quantum AI, Quantum Consciousness