You’ve probably heard this before, but context and nuance are crucial when interpreting scientific data and findings. We’ve touched on their importance in science here, but in today’s post, we would like to give you a concrete example of that.
The initial situation
This example is centered around the primary visual cortex and the inputs that this region receives from the auditory cortex. The story goes like this: some decades ago, scientists discovered that each of our senses, such as vision and hearing, projects (i.e. sends inputs) to specific areas of the cortex, such as the primary visual cortex (V1) or the primary auditory cortex (A1). Each of these primary regions was believed to receive information from one modality and that modality only. In other words, your eyes had no business talking to the primary auditory cortex. And because these areas dealt with just the one modality, they were termed unimodal.
Of course, we don’t perceive the world one modality at a time. When we’re watching a movie, we don’t first process the speech, then see what’s actually happening. Instead, these senses combine to give us a unified experience of an engaging story. This implies that there must be some areas in the brain where the information from the primary cortices is put together. So scientists set to work looking for them. Areas such as the posterior and anterior association areas were then discovered and the natural name for them was multimodal (i.e. integrating information from multiple modalities).
So far, so good. Information followed a clear path from the eyes and ears and skin and so on to the respective primary unimodal areas, and from there to the multimodal areas. Those gave us an integrated picture of the world and all was neatly separated and made for pretty pictures in science textbooks.
Understanding the broader context
But alas, those pesky scientists just couldn’t let things be. They developed better methods and conducted new experiments and lo and behold… They found, for example, that when pairing a dot with a beep and asking people to press a button as soon as they see the dot, people will respond faster than when the dot is shown alone. And looking at EEG signals during this task, they found something that should not have been there: responses in the primary visual cortex to beeps. As you can imagine, this caused quite a stir. Especially because, at the time, there had already been studies claiming that there were no auditory inputs to the visual cortex.
Something must’ve gone wrong somewhere, most likely. But how to tell? Well, this is where context and nuance come into play. The idea that the primary visual cortex responded to auditory stimuli was not new. There were plenty of studies reporting this in non-human primates and there was a growing body of evidence of such responses in humans as well. This was physiological evidence.
What about the anatomy? As we’ve mentioned, some older reports had searched for intermodal anatomical connections between the primary cortices and hadn’t found anything. At the same time, the tract tracing methods of the day weren’t necessarily at their peak performance. What’s more, indirect studies, such as those looking at animals raised in sensory deprivation conditions (for example, blindness), showed that the primary visual cortex of these animals had begun responding to auditory and somatosensory stimuli. These connections couldn’t have materialized out of thin air, so this was interpreted as evidence that the tract tracing studies might have missed something.
Finally, although to the naked eye it looks the same everywhere, the cortex is anything but. At the microscopic level, there is a lot of heterogeneity. Again, placing previous findings in context, scientists noticed that most of the previous research looking at inputs to the visual cortex had focused on the center of this area. Yet one study examining the projections of a higher-order area to V1 had actually found differences in how this area connected to the center versus the periphery of V1.
Putting these pieces of information together, a group of scientists from France decided it was worth conducting new tract tracing experiments to look for connections between visual and auditory areas. Of course, they found something (otherwise this article wouldn’t have been written). Over time, the existence of these connections was confirmed by other groups of scientists and a body of evidence was built around this idea.
And everything was well again. Sure, the pretty diagrams were a bit more complicated, but now we knew some more interesting stuff, such as the fact that the primary visual cortex is, in fact, multimodal. But then… I guess you see where this is going.
Expanding the context
Science doesn’t stand still. New discoveries are published virtually every day, and that often leads us to re-examine previous information through new lenses. That doesn’t mean that the validity of the original data changes, but the context in which we interpret this data is constantly expanding.
Coming back to our example, there is another direction of research we need to mention before moving on. Independent of auditory signals, the primary visual cortex also shows activity related to various behaviours or states of the organism. For example, both running and pupil dilation make V1 neurons fire. However, while visual stimuli lead to complex responses in V1, these behavioural and state signals lead to simple, low-dimensional ones.
Why does this matter? Because it is this observation that led scientists to ask: what if the “sound responses” we see in V1 are not responses to the sounds themselves, but to the movements and internal state changes that these sounds trigger? In an article titled “Behavioral origin of sound-evoked activity in mouse visual cortex”, the authors present evidence supporting this hypothesis.
For one thing, the activity evoked by sounds in V1 is also simple and low-dimensional, just like that evoked by other behaviours. Moreover, severing the connection between the primary auditory and visual cortices doesn’t affect the V1 sound-evoked activity at all. This latter point is in direct contrast with previous studies that showed severing these connections did affect how the visual cortex responded to sounds.
Yet, similar to how methodology played a role in explaining why initial tract tracing studies had not found any auditory inputs to V1, it might also help us explain the current controversy. Unlike in the older study, Bimbard and colleagues (the current authors) recorded from another cortical layer, waited a few days after severing the connections before performing their experiments, and recorded from awake, not anesthetized animals.
Of course, this doesn’t necessarily mean that V1 can’t still be multimodal. What we now need is a more direct comparison, for example between the different layers in awake animals, or between awake and anesthetized animals while recording from the same layer. Yet this is a conclusion that couldn’t have been reached without a more thorough examination of the context surrounding the research.
Why this matters to you
I know, most people couldn’t care less about whether the primary visual cortex is or isn’t multimodal. It’s just a somewhat interesting tidbit that has no bearing on their lives. But this story isn’t (just) about that. Had we just read the headlines, the story would’ve ended up as something along the lines of “scientists can’t make up their minds about whether the primary visual cortex is multimodal or not”. From there, I can easily see how someone could’ve misconstrued this further into a general idea that “they can’t even figure out this simple thing”, thus hinting that scientists absolutely can’t be trusted with more complex problems.
And this is applicable not just to V1, but to almost anything. Stripping information of its context is, naturally, faster. But it comes with a lot of disadvantages. It gives us a false idea that we know everything there is to know about that topic. It tends to trivialize the work and the expertise of specialists in that field. Constantly ignoring context can, over time, make us blind to the fact that there is more information to consider. And, more often than not, it tends to rile us up one way or another, because the little bits are presented in as catchy a manner as possible.
Of course, our resources are limited, so we cannot delve deep into everything. Objectively speaking, compared to the amount of knowledge that’s out there, our personal knowledge of most domains will always barely scratch the surface. So what’s there to do?
First of all, accept that there’s nothing wrong with this. It’s just the way things are, not only for you, but for everyone (side note here: be wary of people who claim to be experts in everything). Secondly, keep an open mind. Remember that science progresses rapidly and if you’re not an expert in that particular niche, it’s incredibly easy to blink and miss twenty steps in-between. And thirdly, if a headline riles you up, always look for the context of the story before sharing it widely.
What did you think about this post? Let us know in the comments below.
And as always, don’t forget to follow us on Instagram, Mastodon or Facebook to stay up-to-date with our most recent posts.
Further reading
Bimbard, C., Sit, T. P., Lebedeva, A., Reddy, C. B., Harris, K. D., & Carandini, M. (2023). Behavioral origin of sound-evoked activity in mouse visual cortex. Nature Neuroscience, 1-8.
Falchier, A., Clavagnier, S., Barone, P., & Kennedy, H. (2002). Anatomical evidence of multimodal integration in primate striate cortex. Journal of Neuroscience, 22(13), 5749-5759.
Giard, M. H., & Peronnet, F. (1999). Auditory-visual integration during multimodal object recognition in humans: a behavioral and electrophysiological study. Journal of Cognitive Neuroscience, 11(5), 473-490.
Ibrahim, L. A., Mesik, L., Ji, X. Y., Fang, Q., Li, H. F., Li, Y. T., … & Tao, H. W. (2016). Cross-modality sharpening of visual cortical processing through layer-1-mediated inhibition and disinhibition. Neuron, 89(5), 1031-1045.
Iurilli, G., Ghezzi, D., Olcese, U., Lassi, G., Nazzaro, C., Tonini, R., … & Medini, P. (2012). Sound-driven synaptic inhibition in primary visual cortex. Neuron, 73(4), 814-828.