"It's just autocomplete"

Large language models aren't lookup tables. Their predictions are a product of sophisticated internal models of reality.

“It’s just autocomplete,” or some version of this claim, is frequently tossed around as a way to dismiss what large language models do. The claim isn’t entirely wrong, but the conclusions people draw from it usually are. A large language model is “just autocomplete,” and therefore:

  • It doesn’t understand anything.
  • It cannot be creative/innovative/novel.
  • It is not “intelligent”.

I want to explain why the premise is right, but the arguments that typically flow from it are not.

Not just autocomplete

“Autocomplete” is a useful touchpoint for explaining how large language models work. But while next-word prediction, or some variation of it, is the task they are trained on, “autocomplete” is a form of reductionism that ignores how language models are able to accomplish this task so well. The “just autocomplete” crowd envisions language models as something akin to massive lookup tables that fill in the next word based on the probabilities in that table. But language models aren’t lookup tables. The way they achieve next-word prediction is by building sophisticated internal models of language, and by extension, reality. The next word a model predicts is the one most consistent with that model of reality. To understand how this works and what it implies, let’s consider dogs.
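To make the distinction concrete, here is a minimal sketch in Python (the words, counts, and vectors are invented for illustration). A lookup table can only return words it has literally seen following a given context; a model-based predictor scores every word in its vocabulary against a learned representation of the context, so it can assign sensible probabilities even to sequences it has never encountered.

```python
import numpy as np

# --- The "lookup table" caricature: a bigram count table ----------------
# It can only return words that literally followed "my" in its training text.
bigram_counts = {
    "my": {"dog": 52, "cat": 41, "car": 17},
}

def lookup_next(word):
    counts = bigram_counts.get(word, {})
    return max(counts, key=counts.get) if counts else None

# --- Model-based prediction: score every word against a context vector ---
# Toy 4-dimensional embeddings (invented for illustration); a real model
# learns thousands of dimensions and derives the context vector from the
# whole preceding text, not a single word.
vocab = {
    "dog":   np.array([0.9, 0.8, 0.1, 0.7]),
    "cat":   np.array([0.8, 0.7, 0.2, 0.1]),
    "fetch": np.array([0.7, 0.1, 0.9, 0.3]),
}
context = np.array([0.85, 0.3, 0.6, 0.75])  # stand-in for "the meaning so far"

def model_next(context_vec):
    # Softmax over dot-product scores: every vocabulary word gets a
    # probability, whether or not this exact sequence appeared in training.
    scores = {w: context_vec @ v for w, v in vocab.items()}
    exps = {w: np.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

print(lookup_next("my"))    # 'dog', purely from memorized counts
print(model_next(context))  # probabilities driven by fit with the context
```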

"It doesn't know what a dog is"

A common rephrasing of the “just autocomplete” argument is something along the lines of “it knows that ‘dog’ is the next word in a sentence, but it doesn’t know what a dog is!” This is not quite correct. On the contrary, because a language model bases its predictions on an internal model of reality, it predicts that the next word is ‘dog’ precisely because it knows what a dog is. How does it do this?

LLMs represent words as high-dimensional vectors called word embeddings (GPT-3, for example, uses over 12,000 dimensions to represent a word). The dimensions of these embeddings capture how the word relates to other words, concepts, objects, and so on. So the vector for “dog” contains the information that dogs have four legs, are animals but not the kind of animals that people generally eat, and can play basketball because there are no rules that say they can’t. When an LLM predicts the next word, it’s not looking back into a database of text and picking the word that appeared most frequently in similar contexts. It’s modeling the meaning of the sentence, then consulting its internal model of reality captured in those embeddings, and picking the word that makes the most semantic sense according to that model. It picks ‘dog’ as the next word because ‘dog’ best fits its semantic understanding of the text. But don’t take my word for it; here are Geoffrey Hinton and Ilya Sutskever, arguably the two most influential researchers in AI, making the exact same argument.
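As a rough illustration of what it means for an embedding to encode “what a dog is,” here is a sketch with toy vectors (the numbers are invented for illustration; real embeddings are learned from data and have thousands of dimensions). The point is that semantic relationships show up as geometric ones: words that play similar roles in language end up pointing in similar directions.

```python
import numpy as np

# Toy word embeddings (invented for illustration). In a real LLM these are
# learned during training and have thousands of dimensions, not five.
embeddings = {
    "dog":         np.array([0.9, 0.8, 0.1, 0.7, 0.2]),
    "cat":         np.array([0.8, 0.9, 0.2, 0.6, 0.1]),
    "animal":      np.array([0.7, 0.7, 0.1, 0.5, 0.3]),
    "spreadsheet": np.array([0.1, 0.0, 0.9, 0.1, 0.8]),
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means the vectors point the same way."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Semantic relationships appear as geometry: "dog" sits near "cat" and
# "animal", and far from "spreadsheet".
for word in ("cat", "animal", "spreadsheet"):
    print(f"dog vs {word}: {cosine(embeddings['dog'], embeddings[word]):.2f}")
```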

"It's just satisfying statistical patterns"

I am, frankly, surprised to see so many empirical researchers dismiss the “intelligence” of these models because their parameters can be mathematically defined. This strikes me as almost a basic misunderstanding of the enterprise we’re engaged in. Why? Because mathematical models are formally expressed theories about the world. Thus, the embeddings, weights, and biases that constitute a large language model are a system of theories about how things in the world operate and relate to each other. This is what is meant by “internal model of the world.” Is an LLM’s internal model of a dog accurate across all dimensions? No, but neither is yours. Why do your internal models constitute understanding while those of an LLM do not? If it’s because you can apply your internal models to novel situations, then I have some bad news for that line of argument.

“Language models can’t be creative, innovative, or tackle novel problems”

This claim can be empirically tested and has been shown not to be true (e.g. Bubeck et al. 2023, Webb et al. 2022). How they achieve this might be puzzling if you conceptualize language models as autocomplete. It is less puzzling if you think of embeddings as internal models of reality. If a language model is predicting the next word based on what is consistent with some internal model of reality, it is not necessarily bound to sequences of text it has seen before. If that internal model is sufficiently sophisticated, it can generalize to novel sequences. This can entail reasoning through abstract math or programming problems, drawing unicorns with programming languages (Bubeck et al. 2023), or generating short stories about ants sinking aircraft carriers (Piantadosi 2023).

Another way to rephrase this claim is “language models can’t predict out-of-sample sequences.” Phrased that way, the claim should be obviously wrong to anyone with a bit of machine learning under their belt. Out-of-sample prediction is routine in machine learning. It’s unclear to me why so many have decided that LLMs are incapable of it.
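If “out-of-sample prediction” sounds exotic, here is a minimal sketch with toy data (invented for illustration): a model fit on a handful of points produces a sensible prediction for an input it never saw, because it learned a general relationship rather than memorizing the training rows.

```python
import numpy as np

# Toy training data (invented for illustration): y is roughly 3x + 2 plus noise.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, size=50)
y_train = 3 * x_train + 2 + rng.normal(0, 0.5, size=50)

# Fit a straight line to the training data.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

# Predict at an input that never appeared in the training set. This is the
# routine case in machine learning, not the exception.
x_new = 7.3
print(slope * x_new + intercept)  # roughly 3 * 7.3 + 2 = 23.9
```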

Useful, even if not "intelligent"

My impression, however, is that most LLM skeptics aren’t particularly interested in these details. They just don’t like the technology for one reason or another, and “they don’t meet some arbitrary definition of intelligence” is as good a reason as any to dismiss them. That’s fine. There are many valid definitions of intelligence these models don’t meet. That doesn’t prevent them from being fantastically useful, though, and most people would benefit from worrying less about which philosophical hurdles these models clear on the road to general intelligence and thinking more about how they can be useful in their own work.