Image credit: @ionut.stefan.92ish
TL;DR: No. No, it doesn’t.
Disclaimer: I would like to make it very clear from the start that I am neither an artificial intelligence (AI) researcher, nor do I possess much formal expertise in this field. I am writing this article, on the one hand, as a way of clarifying many of these concepts for myself, and on the other, to add to the voices cautioning us against anthropomorphizing AI and against the pitfalls of blindly trusting it. But you should not take the information presented here for granted (just as you should generally not take any piece of information for granted without having at least a faint idea of what the consensus on the matter is). Instead, treat this as a journey you’re undertaking with a friend. Feel free to make use of the resources listed throughout the article, and to look up your own sources as well, starting from what is introduced here.
So let’s explore together whether AI understands.
What is understanding?
Before discussing whether AI understands (or even whether you, me, or my cat understand anything), we need to have at least a working definition of this concept. In other words, we need to be able to come up with an answer to the question “what does it mean to understand?”
Now, you might already notice that understanding is one of those pesky concepts, such as attention, knowledge, or consciousness, that we all instinctively think we know, but that we cannot easily articulate. So what can we do in this case? Well, for me, the answer is always to check the relevant literature. Chances are there’s already something to start from in there.
Unsurprisingly, there is a lot of literature dedicated to defining understanding, and definitions vary in complexity. They range from simple dictionary entries, such as “a mental grasp”, “the power of comprehending”, or “the power to make experience intelligible by applying concepts and categories” (according to the Merriam-Webster dictionary), to more complex ones (see Further reading). For the purposes of this article, we will adopt a definition of understanding as a psychological process which involves the following:
- manipulating mental models;
- having a certain degree of knowledge;
- noticing relations between concepts, ideas, and/or things;
- acting with intention;
- experiencing lowered uncertainty as a result.
How does AI work?
Now that we have a working definition pinned down, we still need to have a brief chat about how AI works. The criteria in our definition make it clear that it’s not enough to inspect the output; we need to look under the hood.
Large language models
Simply put, LLMs (such as the infamous ChatGPT) work by making use of statistical dependencies between words in a sentence and between sentences in a text (if you’d like a more in-depth explanation of how that works, I strongly recommend that you have a look here). In essence, these networks “learn” that certain words are more likely to follow others by being trained on incredibly large amounts of written text. That’s it.
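To make the “certain words are more likely to follow others” idea concrete, here is a toy bigram model in Python. To be clear, this is my own minimal sketch and not how an actual LLM is built (real models are transformer networks with billions of parameters that condition on long stretches of text, not single words), but the underlying principle of counting and sampling statistical dependencies is the same:

```python
from collections import Counter, defaultdict
import random

# Toy corpus: a real LLM is trained on trillions of words, not fourteen.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count how often each word follows each other word.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def sample_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = follow_counts[word]
    words = list(candidates.keys())
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# "Generation": repeatedly predict the next word from the previous one.
word = "the"
sentence = [word]
for _ in range(6):
    word = sample_next(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat sat on the rug ."
```

Notice that nothing in this loop knows what a cat or a mat is; it only knows which word tended to come next. Scaling this idea up makes the output vastly more fluent, but the objective stays the same: predict the next token.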
Text-to-image generators
Personally, I find the way these algorithms work (we’re talking here about the likes of Stable Diffusion) a lot more interesting compared to LLMs (again, for a more comprehensive overview have a look here or here). In brief, there are two main steps. During the training phase, the network sees images to which noise is gradually added; what it essentially learns is to predict the noise in an image, i.e. how much needs to be subtracted in order to recover the original. Additionally, the network learns a high-dimensional mapping between input text and images. Then, at generation time, when we ask the algorithm to “draw” something for us, it runs that process in reverse: it starts from pure random noise and gradually removes the noise predicted by the trained network until it arrives at an output image.
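Here is a deliberately oversimplified sketch of that reverse loop in Python. Everything in it is an assumption for the sake of illustration: `noise_predictor` is a made-up stand-in for the trained network, and the update rule is far cruder than what real samplers (such as DDPM or DDIM) actually use. It is only meant to show the shape of the process:

```python
import numpy as np

def noise_predictor(noisy_image, timestep, prompt):
    # Placeholder for the trained network (in Stable Diffusion, a U-Net
    # conditioned on the text prompt). A real predictor returns its best
    # guess of the noise present in `noisy_image` at this timestep.
    return np.zeros_like(noisy_image)  # stub so the sketch runs

def generate(prompt, steps=50, shape=(64, 64, 3)):
    """Reverse diffusion, as described above: start from noise, denoise."""
    image = np.random.randn(*shape)              # start from pure random noise
    for t in reversed(range(steps)):             # walk the noising process backwards
        predicted_noise = noise_predictor(image, t, prompt)
        image = image - predicted_noise / steps  # crude, simplified update rule
    return image

sample = generate("a cat sitting on a mat")
print(sample.shape)  # (64, 64, 3)
```

Again, note what the loop is doing: repeatedly applying a learned statistical mapping between noise, pixels, and text. At no point does anything in it need to know what a cat is.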
Does AI understand?
Purely from a technological perspective, the development of these algorithms is certainly impressive on its own. But I think even a superficial understanding of the algorithms under the hood, as outlined in the paragraphs above, is enough to show that they don’t have any kind of understanding whatsoever. All they do is store incredibly large amounts of statistical associations between certain types of data. Held against our working definition, that falls short on every criterion: no mental models are being manipulated, no relations between concepts are being noticed, and there is no intention behind the output. The problem with that is quite neatly illustrated by the following joke:

Why does it matter that AI doesn’t truly understand?
It matters because currently there is a tendency to treat these models as if they actually understand. And because we know they have access to so much more information than any human could ever go through, we also assume that the information they give us is correct. But their purpose is not to produce factually accurate information; it is to produce output that resembles their training data. Pretty much like when you had to write an essay for your literature class and you ranted about the symbolism of a certain wall color simply because you knew that was what the teacher expected based on their in-class examples. Yet, because of our assumptions and because of how coherent the output seems, we become less likely to check its validity. And depending on what we do with that information, we can become unwitting amplifiers of nonsense.
If you’d like to read more on this topic, I recommend checking out Emily M. Bender, a computational linguist at the University of Washington and an expert on language models.
What did you think about this post? Let us know in the comments below.
And as always, don’t forget to follow us on Instagram, Mastodon, or Facebook to stay up-to-date with our most recent posts.
Further reading
Yufik, Y. M., & Friston, K. (2016). Life and understanding: the origins of “understanding” in self-organizing nervous systems. Frontiers in Systems Neuroscience, 10, 98.