V1


[Image: the study’s scene of two spheres with identical retinal sizes that appear different in size due to 3D context cues (click for full size)]

The primary visual cortex is normally understood as a direct map from the retinal image onto the brain. Apparently we were wrong.

From Nature Neuroscience: Perceived size matters

Using retinotopic mapping to delineate primary visual cortex, Murray and colleagues examined whether the size of activation patterns in V1 differed when subjects looked at either the front or back spheres. Remarkably, when the sphere that subjects were looking at was perceived to be bigger (due to the contextual cues), activity in V1 spread over a larger area than when it was perceived to be smaller, even though the size of the retinal image produced by the spheres was identical. Activity at the earliest stages of cortical processing does not therefore simply reflect the pattern of light falling on the retina. Somehow the complex three-dimensional cues present in the scene can be integrated to take into account perceived depth in the representation present in V1.

There has been work suggesting as much before, but this provides clear evidence. The article goes on:

This work is not the first to show that V1 activity can be strongly linked to conscious perception rather than to physical (retinal) stimulation. It is also clear that neural processing in V1 reflects not just feed-forward signals but also feedback influences from higher areas. However, this work not only provides a particularly clear and compelling example of these properties but also, for the first time, clearly links the spatial extent of what we perceive (rather than, for example, contrast or direction of motion) to the spatial extent of activity in V1. More fundamentally, these findings force us to re-evaluate the notion of a ‘hard-wired’ retinotopy in V1. The finding that V1 contains a topographic map of the retinal projection of the visual field has been central to visual neuroscience. Instead it now seems that the topographic map in V1 can be modified dynamically according to the perceived size of an object. This has important implications not only for understanding the role of V1 in visual processing but also in practical terms. For instance, it has become common practice in functional MRI studies focusing on early visual areas to functionally localize spatially delimited regions of interest using retinotopic mapping. The general usefulness of this approach notwithstanding, future studies will have to take into account the possibility that visual context can dynamically modify this retinotopy, even in early visual areas.

Neat stuff.
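
To get a feel for what “spread over a larger area” means here, a toy numerical sketch (my own illustration; the cortical magnification factor and the 20% gain are invented, not taken from the paper):

```python
# Toy model (mine, not the authors' analysis): treat the cortical extent
# of a V1 activation as retinal angular size scaled by a perceived-size
# gain. Under hard-wired retinotopy the gain is 1 for both spheres; the
# reported result is that the gain tracks perceived size instead.

def cortical_extent_mm(retinal_deg, perceived_gain, mm_per_deg=2.5):
    """Cortical extent (mm) of the activated patch; mm_per_deg is an
    assumed, rough cortical magnification factor."""
    return retinal_deg * perceived_gain * mm_per_deg

retinal_size = 3.0  # degrees of visual angle, identical for both spheres

# Classical prediction: identical retinal images -> identical V1 extents.
front = cortical_extent_mm(retinal_size, perceived_gain=1.0)
back = cortical_extent_mm(retinal_size, perceived_gain=1.0)
print(f"classical:      front {front:.1f} mm, back {back:.1f} mm")

# Observed pattern: the sphere perceived as larger (the apparently more
# distant one) activates a broader patch; the 20% gain is invented.
front = cortical_extent_mm(retinal_size, perceived_gain=1.0)
back = cortical_extent_mm(retinal_size, perceived_gain=1.2)
print(f"perceived-size: front {front:.1f} mm, back {back:.1f} mm")
```

The retinal input is identical in every call; only the contextual gain differs, which is exactly the dissociation the experiment isolates.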

2 Comments

  1. Hey, I thought you hated vision and could do without it. But this is cool, and I’ll have to go read the paper now.
    I don’t understand how this is clear evidence, though. Couldn’t it be that the V1 output from the spheres (circles) themselves isn’t identical, seeing as activity in V1 will change with the image’s background? Looking at the bottom sphere is obviously not the same (two-dimensionally) as looking at the top sphere.

  2. I take it the old model looked like
    ::(retina)[field%, wavelength, intensity, motion]->(encoding to postocular neurons)[activity spike trains]->(V1)[proportional field%, etc. tags] (possible feedback loop?)->[higher processing, spatial modelling etc.]->(V1)(Vn)[…]->…
    and the new model looks like
    ::(retina)[as above]->(encoding)[spike trains]->(?)[processing module]->(V1)[disproportional field %, wavelength, intensity, motion]=[spatial modeling]…->…
    or am I missing a bit? (A runnable paraphrase of these two pipelines appears after the comments.) Anyhow, this would fit with the idea that eye functions accrete over time (take that, crazy creationists): the processing necessary for getting about is prior even to the main visual process. That is, the first thing that happens is not that the visual field is represented and then interpreted; rather, it is encoded, never represented _as_ a visual field, and just interpreted straight off. Thus the impossibility of imagining what you’re looking at in two dimensions (try it!): though television provides a convenient model of what the homunculus was _supposed_ to be looking at, the actual attribution to oneself of an interior function like the one imagined fails…

    It makes further sense that we ought to have expected this: it’s more efficient not to need an extra layer of processing bureaucracy. This also explains some of the visual illusions: if you had the visual field represented _exactly_ in the head, you’d think a system would have developed for mechanically comparing different regions of the visual field, preventing us from falling into the Müller-Lyer length illusion or various color illusions (see previous posts on this blog), since that’s what we would _need_ in order to tell, for instance, how quickly and from what angle the sabertooth tiger is approaching.

    *sigh* Hindsight.

    Cf. also here a comment that I was thinking up but can’t quite articulate, regarding “concepts”, McDowell, and the deep hardware of the visual processing system.
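
Reading the two diagrams in comment 2 as dataflow pipelines, here is a minimal runnable paraphrase (the stage names, the multiplicative depth gain, and all numbers are my own inventions for illustration):

```python
def retina(scene):
    """Retinal encoding: field extent plus wavelength/intensity/motion."""
    return {"field": scene["field"], "features": scene["features"]}

def old_model(scene):
    """Old view: V1 holds a proportional retinotopic map; perceived size
    is computed only in higher areas (with a possible feedback loop)."""
    encoded = retina(scene)
    v1_extent = encoded["field"]                   # proportional map
    perceived = v1_extent * scene["depth_gain"]    # higher processing
    return v1_extent, perceived

def new_model(scene):
    """New view: contextual depth cues are integrated at or before V1,
    so the V1 map itself is scaled by perceived size."""
    encoded = retina(scene)
    v1_extent = encoded["field"] * scene["depth_gain"]  # disproportional
    return v1_extent

scene = {
    "field": 3.0,  # retinal size of the sphere, arbitrary units
    "features": ("wavelength", "intensity", "motion"),
    "depth_gain": 1.2,  # contextual perceived-size gain (invented)
}

print(f"old model, V1 extent: {old_model(scene)[0]:.1f}")  # 3.0 (retinal)
print(f"new model, V1 extent: {new_model(scene):.1f}")     # 3.6 (scaled)
```

In both sketches the retinal encoding is identical; the models differ only in where the contextual gain is applied, which is the distinction the fMRI result speaks to.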
