The latest language models, such as GPT-4o and Gemini 1.5 Pro, are touted as “multimodal” – able to understand images and audio as well as text – but new research shows that they don’t really see the way you’d expect them to. In fact, they may not see at all.
Let’s be clear up front: no one is claiming that these AI models can see the way a human can (well, maybe some people do). But the marketing and benchmarks used to promote them rely on phrases like “visual capabilities” and “visual understanding” to describe how the models see and analyze images and video, letting them do everything from solving homework problems to watching a game.
So while these companies’ claims are carefully worded, they clearly want to convey that their models see, in some sense of the word. And they do, but in much the same way they do math or write stories: by matching patterns in the input data to patterns in their training data. That leads the models to fail in the same way they fail at other seemingly trivial tasks, like picking a random number.
A somewhat informal but systematic study of current AI models’ visual understanding was carried out by researchers at Auburn University and the University of Alberta. They tested the biggest multimodal models on a series of very simple vision tasks, such as determining whether two shapes overlap, how many pentagons are in a picture, or which letter in a word is circled. (An overview micropage can be viewed here.)
These are the sorts of questions that even a first-grader could get right, yet they posed significant challenges for the AI models.
“Our seven tasks are extremely simple, and humans can perform them with 100% accuracy. We expect AI to be able to do the same, but currently this is not the case,” co-author Anh Nguyen wrote in an email to TechCrunch. “Our message is: ‘Look, these best models are still failing.’”
Take the overlapping shapes test, one of the simplest visual reasoning tasks possible. When presented with two circles that overlap slightly, just touch, or sit some distance apart, the models could not consistently get it right. True, GPT-4o was correct more than 95% of the time when the two circles were far apart, but at zero or small distances it was right only 18% of the time. Gemini Pro 1.5 performed best, but still got only 7/10 at close distances.
(The figures are not intended to show the exact performance of the models, but rather to illustrate how inconsistent the models are across conditions. Statistics for each model are provided in the paper.)
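To make the setup concrete, here is a minimal sketch of how such a test could be reproduced. This is not the authors’ exact protocol; the model name, prompt wording, and distance values are illustrative assumptions. It renders two circles whose centers are a controlled distance apart with matplotlib and asks GPT-4o about them through the OpenAI chat-completions API.

```python
# Minimal sketch (not the paper's exact protocol): render two circles whose
# centers are a chosen distance apart, then ask a vision model if they overlap.
# Assumes the `openai` and `matplotlib` packages and an OPENAI_API_KEY env var.
import base64
import io

import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
from openai import OpenAI

client = OpenAI()

def circles_png(distance: float, radius: float = 1.0) -> bytes:
    """Return a PNG of two circles whose centers are `distance` apart."""
    fig, ax = plt.subplots(figsize=(4, 4))
    ax.add_patch(plt.Circle((0.0, 0.0), radius, fill=False, color="tab:blue", linewidth=2))
    ax.add_patch(plt.Circle((distance, 0.0), radius, fill=False, color="tab:red", linewidth=2))
    ax.set_xlim(-2 * radius, distance + 2 * radius)
    ax.set_ylim(-2 * radius, 2 * radius)
    ax.set_aspect("equal")
    ax.axis("off")
    buf = io.BytesIO()
    fig.savefig(buf, format="png", dpi=150)
    plt.close(fig)
    return buf.getvalue()

def ask_about_image(png: bytes, question: str, model: str = "gpt-4o") -> str:
    """Send an image plus a question to a chat-completions vision model."""
    b64 = base64.b64encode(png).decode()
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

# Sweep the center distance from overlapping (< 2 * radius) to clearly apart.
for d in (0.5, 1.0, 1.9, 2.0, 2.5, 4.0):
    answer = ask_about_image(circles_png(d), "Do the two circles overlap? Answer yes or no.")
    print(f"distance={d}: {answer}")
```

With a radius of 1, any center distance below 2 means the circles overlap, so the sweep crosses the boundary condition where the models reportedly struggle most.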
Or how about counting the number of interlocking circles in an image? I’d bet an above-average horse could do this.
With five rings, they all get it right 100% of the time, but adding just one more ring completely ruins the results. Gemini gets lost and doesn’t answer correctly even once. Sonnet-3.5 answers “6” correctly about a third of the time, and GPT-4o gets it right just under half the time. Adding another ring makes it harder still, yet adding one more after that makes it easier for some of the models.
The point of this experiment is to show that whatever these models are doing, it doesn’t really correspond to what we think of as seeing. After all, even if they saw poorly, we wouldn’t expect images with six, seven, eight, or nine rings to vary so widely in success.
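A similar sketch could probe the counting task. Again, these are not the authors’ actual stimuli: the staggered, Olympic-style layout, the spacing parameters, and the prompt are all assumptions, and the snippet reuses the imports and ask_about_image helper from the example above.

```python
# Sketch only: n interlocking rings in two staggered rows (Olympic-style layout).
def rings_png(n: int, radius: float = 1.0, overlap: float = 0.5) -> bytes:
    fig, ax = plt.subplots(figsize=(6, 3))
    for i in range(n):
        cx = i * (2 * radius - overlap)       # shift each ring so neighbors interlock
        cy = -0.8 * radius if i % 2 else 0.0  # alternate rows, like the Olympic rings
        ax.add_patch(plt.Circle((cx, cy), radius, fill=False, linewidth=2))
    ax.set_xlim(-2 * radius, n * 2 * radius)
    ax.set_ylim(-3 * radius, 2 * radius)
    ax.set_aspect("equal")
    ax.axis("off")
    buf = io.BytesIO()
    fig.savefig(buf, format="png", dpi=150)
    plt.close(fig)
    return buf.getvalue()

# Ask for a count as the ring total moves past the familiar five.
for n in range(5, 10):
    print(n, ask_about_image(rings_png(n), "How many circles are in this image? Answer with a single number."))
```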
Similar patterns emerged in the other tests: it wasn’t that the models were seeing or reasoning well or poorly, but rather that there seemed to be some other reason why they could count in one case but not another.
Of course, one possible answer is staring us right in the face: why are they so good at getting an image of five circles right, yet fail so miserably on the rest, or when it’s five pentagons? (To be fair, Sonnet-3.5 did pretty well on that one.) Because an image of five circles features prominently in all of their training data: the Olympic rings.
Not only is this logo repeated over and over in the training data, it’s likely described at length in alt text, usage guidelines, and articles about it. But where in their training data would you find six interlocking rings? Or seven? If their responses are any indication, nowhere! They have no idea what they’re actually “looking” at, and no real visual understanding of what rings, overlap, or any of these concepts are.
I asked the researchers what they think of this “blindness” they accuse the models of having. Like other terms we use, it has an anthropomorphic quality that isn’t quite accurate but is hard to do without.
“I agree. There are many human definitions of ‘blindness,’ and we don’t yet have a word to describe this kind of blindness/insensitivity in an AI to the images we’re showing it,” Nguyen wrote. “Currently, we don’t have the technology to accurately visualize what the model sees, and its behavior is a complex function of input text prompts, input images, and billions of weights.”
He speculated that the model is not completely blind, and that the visual information it extracts from the images is approximate and abstract, such as “there is a circle on the left side.” However, the model has no way of making visual judgments, so it responds like someone who has information about the image but cannot actually see it.
As a final example, Nguyen sent the following, which supports the above hypothesis:
When a blue circle and a green circle overlap (which the question prompts the model to accept as fact), the intersection is often shaded cyan, as in a Venn diagram. If someone asked you this question, you or any smart person might well give the same answer, because it’s totally plausible if your eyes are closed. But no one with their eyes open would respond that way.
Does all this mean that these “vision” AI models are useless? Not at all. Their inability to do elementary reasoning about certain images speaks to their fundamental capabilities, but not their specific ones. Each of these models can be highly accurate on things like human actions and facial expressions, or photos of everyday objects and situations, which is in fact what they are intended to interpret.
If we relied on AI companies’ marketing to tell us everything these models can do, we’d be led to believe they have 20/20 vision. Research like this is needed to show that, however accurately a model may say whether a person is sitting, walking, or running, it does so without “seeing,” in the sense (if you will) that we mean it.