As we look at the world around us, images flicker into
our brains like so many disparate pixels on
a computer screen, changing every time our eyes move,
which happens several times a second. Yet we don't
perceive the world as a constantly flashing computer
display.
Why not?
Neuroscientists at The Johns Hopkins University think
that part of the answer lies in a special
region of the brain's visual cortex that is in charge of
distinguishing between background and
foreground images. Writing in a recent issue of the journal
Neuron, the team demonstrates that nerve
cells in this region, called V2, are able to "grab onto"
figure-ground information from visual images for
several seconds, even after the images themselves are
removed from our sight.
"Recent studies have hotly debated whether the visual
system uses a buffer to store image
information and, if so, the duration of that storage," said
Rudiger von der Heydt, a professor in Johns
Hopkins' Zanvyl Krieger
Mind/Brain Institute and co-author of the paper. "We
found that the answer
is, yes, the brain in fact stores the last image seen for
up to two seconds."
The image that the brain grabs and holds onto
momentarily is not detailed; it's more like a rough
sketch of the layout of objects in the scene, von der Heydt
said. This may elucidate, at least in part,
how the brain creates for us a stable visual world when the
information coming in through our eyes
changes at a rapid-fire pace, up to four times in a single
second.
The study was based on recordings of activity in nerve
cells in the V2 region of the brains of
macaques, whose visual systems closely resemble those of
humans. Located at the very back of the
brain, V2 is roughly the size of a watch strap.
The macaques were rewarded for watching a screen on
which various images were presented
while the researchers recorded the responses of nerve
cells in the animals' brains. Previous experiments have
shown that the nerve cells in V2 code for elementary
features such as pieces of contour and patches
of color. What is characteristic of V2, though, is that it
codes these features with reference to
objects. A vertical line, for instance, is coded either as
the contour of an object on the left or as a
contour of an object on the right. In this study, the
researchers presented sequences of images
consisting of a briefly flashed square followed by a
vertical line. They then compared the nerve cells'
responses to the line when it was preceded by a square on
the left and when it was preceded by a
square on the right. The recordings revealed that the V2
cells remember the side on which the square
had been presented, meaning that the flashing square set up
a representation in the brain that
persisted even after the image of the square was
extinguished.
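The logic of the experiment can be caricatured in a few lines of code. The following is a hypothetical toy model, not the authors' analysis: the exponential decay, the time constant, and all function names are assumptions made purely for illustration. It captures the two key ideas, that a border-ownership cell responds more strongly to a line when the remembered square lay on its preferred side, and that this memory trace fades over roughly two seconds.

```python
import math

# Assumed decay constant (seconds); chosen so the trace stays
# measurable for roughly the two seconds reported in the study.
TAU = 0.8

def trace(t_since_offset, initial=1.0, tau=TAU):
    """Strength of the figure-ground memory trace t seconds after
    the square is removed, under an exponential-decay assumption."""
    return initial * math.exp(-t_since_offset / tau)

def line_response(side_of_square, preferred_side, t_since_offset):
    """Hypothetical response of a V2 border-ownership cell to the
    test line: a baseline, plus a boost when the remembered square
    lay on the cell's preferred side."""
    baseline = 1.0
    boost = trace(t_since_offset) if side_of_square == preferred_side else 0.0
    return baseline + boost

# One second after the square disappears, the remembered side still
# modulates the cell's response to the identical line stimulus.
r_preferred = line_response("left", "left", 1.0)
r_nonpreferred = line_response("right", "left", 1.0)
```

In this sketch, `r_preferred` exceeds `r_nonpreferred` even though the line stimulus itself is identical in both cases, which is the signature the recordings revealed: the difference comes entirely from the decaying trace of where the square had been.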
Von der Heydt said that discovering memory in this
region was quite a surprise because the
usual understanding is that neurons in the visual cortex
simply respond to visual stimulation but do not
have a memory of their own.
Though this research is only a small piece of the "how
people see and process images" puzzle,
it's important, von der Heydt said.
"We are trying to understand how the brain represents
the changing visual scene and knows
what is where at any given moment," he said. "How does it
delineate the contours of objects, and how
does it remember which contours belong to each object in a
stream of multiple images? These are
important and interesting questions whose answer may some
day have very practical implications. For
instance, how we function under conditions that strain our
ability to process all relevant information —
whether it be driving in city traffic, surveying a large
crowd to find someone, or something else — may
depend in large part on what kind of short-term memory our
visual system has."
Understanding how this brain function works is more
than just interesting. Because this study
shows how the strength and duration of the memory trace can
be directly measured, it may eventually
be possible to understand its mechanism and to identify
factors that can enhance or reduce this
important function. This could assist researchers in
unraveling the causes of, and perhaps even
identifying treatments for, disorders such as
attention deficit disorder and dyslexia.
Philip O'Herron of the Johns Hopkins School of
Medicine is a co-author on this study, which was
supported by grants from the National Institutes of
Health.