Regardless of whether the brain is considered from the perspective of a single brain cell and its surroundings, or from that of the different regions of the brain and the roles they perform, the center of human consciousness is an incredibly complex structure.
Ken Johnson, director of the Krieger Mind/Brain Institute, compares the challenge of understanding how such an intricate system works to that of understanding a computer by taking it apart. Progress would have to advance on two fronts simultaneously: understanding the activity of the microelectronic transistors that are the machine's most basic components, and ascertaining what larger structures, like the central processing unit, do for the computer as a whole.
"You could produce a flow diagram of the major components of a computer and if you didn't understand how transistors and digital logic work, you might as well be looking at the flow diagram of a corporation," Johnson says.
For almost four decades, modern research into the brain has similarly been advancing along two fronts. Most researchers either study the activity of a single brain cell or groups of brain cells, or they identify and characterize the functions of larger regions in the brain.
Neuroscientists have long yearned to push these two brain frontiers together--to use knowledge of the brain cell to better understand regions of the brain and vice versa. Researchers from two different Johns Hopkins departments recently struck up a collaboration to try to do just that.
Steve Yantis, a professor in the Department of Psychological and Brain Sciences in the Krieger School of Arts and Sciences, and Ed Connor, an assistant professor of neuroscience in the Krieger Mind/Brain Institute, both study how the brain recognizes shape. Yantis, though, has been studying different regions of the brain involved in recognition, while Connor is probing these same processes at the level of individual brain cells.
Although recognizing a shape may at first seem like a simple thing to do, Connor points out that orientation, proximity and other factors can make the recognition process very complex.
"Any given object can produce a virtual infinity of views at the retinal level depending upon position, orientation, illumination or occlusion," Connor says. "All of these factors can change the appearance of an object."
Through experiments using surgically implanted electrodes to directly measure the activity of key brain cells in animals, Connor has produced findings that support the idea that the brain uses a lexicon of parts to represent and recognize shapes. For example, he's been able to show that a particular set of brain cells is more likely to become active when a test subject observes a convex segment of a border rather than a concave segment.
"Shapes seem to be recognized in terms of parts of their boundaries," Connor explains. "The number of different objects that we have to represent in the brain and store in memory is virtually infinite. And you have to do that with a large but finite number of neurons, so a parts-based approach makes sense. Represent something in terms of its parts, and you've got a really high number of combinations, just as letters of the alphabet encode thousands of words."
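The combinatorial payoff Connor describes can be made concrete with a small sketch. The part names and counts below are hypothetical, not drawn from his experiments; the point is simply that a tiny vocabulary of boundary parts, combined across positions, yields an enormous number of distinct shape descriptions, just as letters combine into words.

```python
# Illustrative sketch of the alphabet analogy (part names are hypothetical):
# a small vocabulary of boundary parts, assigned one per position around a
# shape's boundary, describes a vast number of distinct shapes.

# Hypothetical boundary-part vocabulary a V4-like population might signal.
parts = ["sharp convex", "broad convex", "shallow concave", "deep concave"]

def num_shapes(n_part_types: int, n_positions: int) -> int:
    """Count distinct shapes describable as one part per boundary position."""
    return n_part_types ** n_positions

# Just 4 part types at 8 boundary positions already give 65,536 combinations.
print(num_shapes(len(parts), 8))  # 4**8 = 65536
```

A population of neurons need only signal which part appears where, rather than dedicating a cell to every possible whole object.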
The convexity-activated cells Connor identified are located in a part of the brain with four distinct regions associated with the processing of visual input. These regions are known as V1, V2, V3 and V4, and Connor's cells are found in V4. All four regions are located near the back of the brain and contain separate maps of visual space.
"Visual information flows from the eyes to V1, and then toward higher processing areas like V4," Connor says. "We think the bias toward convex parts in V4 reflects the greater importance of convex parts on real objects. For example, the heads and appendages of animals are convex. Concavities are often just junctions between such parts."
Yantis has conducted research using functional magnetic resonance imaging, or fMRI, to learn more about what these regions do. fMRI lets scientists monitor the flow of blood to various areas of the brain in real time while test subjects perform different mental tasks. Areas receiving greater blood flow, scientists reason, will contain cells most active in the task currently under way.
Yantis and other researchers have been able to show that as visual input moves from V1 to higher areas, increasingly complex properties of the object are analyzed and encoded.
"V1 … represents very simple properties like orientation, color and direction of motion," Yantis says. "And as you move up into higher areas like V4, you find cells that respond to corners and to convex parts. As you move even farther up, the representations become even more sophisticated and more complex."
Each technique--directly recording individual brain cells and studying large regions of the brain with fMRI--has its relative advantages. fMRI can be used on humans to study regions of the brain and the way they interact to perform tasks, but it can't tell scientists anything about the individual activity of brain cells. The brain cell-monitoring experiment, known as a neurophysiology experiment, does allow monitoring of individual nerve cells but can't be used on humans except in very special circumstances (which usually have to do with presurgical monitoring of the brain). It is also usually restricted to a small region of the brain. When the techniques are used in concert, Connor asserts, the potential for new findings is even greater.
"The neurophysiology makes predictions we can test with fMRI, and the fMRI makes predictions we can test with neurophysiology," Connor says.
Yantis and Connor's new collaborative venture began with a series of fMRI studies of human brains. Although region V4 is well-defined in animal models, only half of V4 (the half representing the upper part of visual space) has been identified in humans. Yantis and Connor hope to determine whether the convexity bias observed in animals is also present in this part of human V4. In addition, if the bias exists, it may be possible to identify the full extent of human V4 using convexity responses as a marker.
Their long-term objective is to determine how various properties are "encoded" by brain cells.
"Any given neuron participates in the representation of a very large number of objects, and it's this complex relative activity in a large population of neurons that allows you to represent a very large population of possible objects," Yantis says. "The puzzle we're faced with is determining the syntax or the rules by which these simple units combine to provide this extremely rich representational medium."
A first round of testing has already been completed and a second round started. Subjects are initially shown a rotating wedge and an expanding ring. By correlating brain activity with the placement of the wedge or the ring in the subject's visual field, scientists can localize the maps of visual space corresponding to V1, V2, V3 and V4 in the subject's brain.
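The logic of that localization step can be sketched in a few lines. The response model and numbers below are simplified assumptions, not the actual analysis: a voxel responds as the wedge sweeps through its preferred angle, so the timing of the response reveals which point in the visual field that voxel maps.

```python
# Simplified sketch (hypothetical response model) of retinotopic mapping:
# a rotating wedge activates a voxel when it overlaps the voxel's preferred
# angle, and the active samples reveal the angle that voxel maps.
import math

def voxel_response(preferred_angle, wedge_angle, width=40.0):
    """1.0 while the wedge overlaps the voxel's preferred angle, else 0.0."""
    diff = abs((wedge_angle - preferred_angle + 180) % 360 - 180)
    return 1.0 if diff < width / 2 else 0.0

def estimate_preferred_angle(responses, angles):
    """Recover the preferred angle as the circular mean of active samples."""
    sx = sum(math.cos(math.radians(a)) for r, a in zip(responses, angles) if r > 0)
    sy = sum(math.sin(math.radians(a)) for r, a in zip(responses, angles) if r > 0)
    return math.degrees(math.atan2(sy, sx)) % 360

angles = list(range(0, 360, 5))                # the wedge sweeps the field
responses = [voxel_response(90.0, a) for a in angles]
print(estimate_preferred_angle(responses, angles))  # recovers ~90 degrees
```

Repeating this for every voxel produces the map of visual space, and the expanding ring plays the analogous role for distance from the center of gaze.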
In later steps, subjects are asked to look at convex or concave corners of shapes, with the rest of the shape gradually grayed out as though the corner were under a spotlight and the rest of the object in shadow.
"That's the way we can show a stimulus that just has a convexity in it and nothing else, because we're leaving the rest of the object ambiguous," Connor explains.
"Previously, people have looked for differences in the way the brain responds to different categories of objects--faces, houses or chairs, for example," Yantis says, "and they found different patterns of activity. But there's actually some controversy about how best to think about those data. What we're trying to do is to set up the problem using simpler, targeted stimuli, so we can answer more theoretically driven questions like, What is the nature of the units of representation?"