Does GPT-4 really understand what we’re saying?


A question for David Krakauer, President of the Santa Fe Institute for Complexity Science, where he studies the evolution of intelligence and stupidity on Earth.

Photo courtesy of David Krakauer

Does GPT-4 really understand what we’re saying?

“Yes and no” is the answer. In my new article with computer scientist Melanie Mitchell, we surveyed AI researchers on the question of whether large pre-trained language models like GPT-4 can understand language. When researchers say that these models do or don’t understand us, it is not clear that we even agree on what we mean by understanding. When Claude Shannon invented information theory, he made it very clear that the part of information he was interested in was communication, not meaning: you can have two messages that are equally informative, one full of meaning and the other with none at all.
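As a loose illustration of Shannon’s point (not from the interview), the sketch below computes the empirical character-level entropy of a meaningful English sentence and of the same characters shuffled into gibberish. The character statistics, and hence the measured Shannon information, are identical, while the meaning is not. The example strings and the `char_entropy` helper are hypothetical names introduced only for this sketch.

```python
# A minimal sketch of Shannon's distinction: empirical entropy depends only
# on symbol statistics, not on whether the message means anything.
import math
import random
from collections import Counter

def char_entropy(msg: str) -> float:
    """Empirical character-level Shannon entropy, in bits per character."""
    counts = Counter(msg)
    n = len(msg)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

meaningful = "the cat sat on the mat and purred quietly"

# Shuffling preserves the character frequencies, so the entropy is
# identical, but the meaning is destroyed.
chars = list(meaningful)
random.shuffle(chars)
gibberish = "".join(chars)

print(f"{meaningful!r}: {char_entropy(meaningful):.3f} bits/char")
print(f"{gibberish!r}: {char_entropy(gibberish):.3f} bits/char")
```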

There is a kind of understanding that is just coordination. For example, I could say, “Can you pass me the spoon?” You would say, “Here it is,” and I would say, “Well, you understood me!” because we coordinated. That is not the same as generative or constructive understanding, where I tell you, “I’m going to teach you some calculus, and you can then apply that knowledge to a problem I haven’t told you about.” That goes beyond coordination. It’s like saying: here’s the math, now apply it to your life.

So understanding, like information, has multiple senses, some more demanding than others. Can these language models coordinate with us on a shared meaning? Yes. Do they understand in the constructive sense? Probably not.


I would draw a sharp distinction between being super-functional and being intelligent. Let me use this analogy: nobody would say that a car runs faster than a human. You would say that a car can move faster than a human across a flat surface. It can perform a function more efficiently, but it doesn’t run faster than a human. One of the questions here is whether we are using “intelligence” consistently. I don’t think we are.

For me, the other dimension to this is what I call the standard model of science, namely parsimony, as opposed to the machine-learning model of science, which is megamony. Parsimony says: “I will try to explain as much as possible with as few resources as possible.” That means few parameters, or a small number of laws or initial conditions. The root of parsimony, by the way, is thrift. These language models are the exact opposite of that. They are massive. They are trained on a vast but very narrow domain: text. Most human understanding maps linguistic experience onto somatosensory experience. When I say to you, “This is painful,” you don’t associate that with another word. You map it onto a sensation. Most human understanding, conveyed through labels or words, refers to experiences that have an emotional or physical dimension. GPT-4, by contrast, only finds correlations between words.

Another very important dimension of difference is that our cognitive apparatus evolved. The environment that produced the qualities we call intelligence and understanding is very rich. Now look at these algorithms. What is their selective context? Us. We are the cultural selection acting on algorithms. They don’t have to survive in the world. They have to survive by convincing us they’re interesting to read. The evolutionary process that takes place in training these algorithms is radically different from what it means to survive in the physical world, and that’s another clue. To reach even remotely plausible levels of human competence, the training sets fed to these algorithms exceed what a human would experience in a nearly infinite number of lifetimes. So we know we can’t be doing the same thing. It’s just not possible. These things are Babel algorithms. They live in Borges’ Library of Babel. They have the complete experience; they have access to all the knowledge in the library. We do not.


Aside from everything we cite in the paper (the brittle mistakes language models make, telltale mistakes we would never make), the other point to emphasize is that humans engage in mechanistic reasoning about things. If I said to you, “There’s a tram rolling down the hill and there’s a cat in its path. What happens next?” you wouldn’t just try to guess the next word, like GPT-4. You would form, in your mind’s eye, a little mental physical model of that situation. And you would turn to me and say, “Well, is it a smooth surface? Is there a lot of friction? What kind of wheels does it have?” Your language sits on top of a physics model, so you reason through the narrative. Not this thing. It has no physics model.

Now, the interesting point is that perhaps, in all this wealth of data, if we were inventive enough, we could find a physics model. It is tacit, implicit in the vast corpus of language, but the model doesn’t access it. You could ask, “What physics model lies behind the decision you are making now?” and it would simply confabulate. So the narrative that says we’ve rediscovered human thought is misguided in so many ways. It is just demonstrably wrong. This can’t be the right account.

Lead image: studiostoks / Shutterstock

Brian Gallagher

