Johns Hopkins Magazine - September 1996 Issue

Hearing: In Short

New hope for severe dyslexia?

How do we understand language? By means of dancing neurons in the brain, a network built on sound frequencies mapped in time. At least that's the way Xiaoqin Wang has found it works in marmosets, New World tropical monkeys with tufted ears, long tails, and a propensity for elaborate chatter...well, let's call that vocalizations.

"You cannot say they have speech," says Wang, a neuroscientist in the Department of Biomedical Engineering. "But some people would say they have their language. The main difference is quantity, their capacity to process many different sounds. We are able to process a lot more sounds."

Wang believes his work on monkey communication has implications for human speech. Indeed, he has already been part of recent groundbreaking research to fashion "glasses for the ears." Built on the latest neuroscientific findings as to how primates code and decode speechlike signals, this ear-training method helped dyslexic children improve their understanding of speech by two years, after only one month of training.

Decoding speech is a major feat, if you think about it. For one thing, though we tend to think of speech as consisting of separate words and syllables, in fact the sound streams along continuously; the separations we "hear" are provided by the brain. That's why, when you hear someone speaking a language you do not know, it's near-impossible to tell even where one word ends and the next begins.

Streaming isn't the worst of it, either. It turns out that cortical neurons fire rather slowly--no faster than every one-tenth of a second--yet we (and monkeys) can understand sounds that change much faster, with important signals coming as little as a few milliseconds apart. For example, says Wang, "when you say /ba/ and /pa/, those two sounds differ only in the initial segment, the time between when you open your mouth and when the voice starts. Depending on how long that pause is--the difference is about 10 or 15 milliseconds--you hear /ba/ or /pa/." Furthermore, you hear it no matter whether the sound is spoken by a woman's voice, a man's voice, or a child's piping treble.
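
A toy calculation makes the timing concrete. The little routine below is purely illustrative; the 15-millisecond boundary and the function itself are assumptions for the sake of the example, not anything from Wang's lab:

```python
# Illustrative sketch only: classify a syllable as /ba/ or /pa/ from its
# voice onset time (VOT), the gap between the lip release and the start of
# voicing. The ~15 ms boundary is an assumed, simplified threshold.

def classify_stop(voice_onset_time_ms: float) -> str:
    """Return '/ba/' for short VOTs and '/pa/' for longer ones."""
    VOT_BOUNDARY_MS = 15.0  # assumed boundary; listeners hear a sharp switch near here
    return "/ba/" if voice_onset_time_ms < VOT_BOUNDARY_MS else "/pa/"

for vot in (5, 12, 20, 40):
    print(f"VOT = {vot:3d} ms -> {classify_stop(vot)}")
```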

Amazing.

Wang tackled the puzzle in marmosets by anesthetizing the animals, then mapping neural firing in the auditory cortex while he played them recorded "twitter calls," one type of vocalization used in their social interactions. In the experiment, calls were varied. Some were voiced by different marmosets, in and outside the particular troop, while others were synthetic (the sound frequencies were normal, but not the timing of the "syllables").

All cell groups in the auditory cortex respond to sounds of a particular frequency. So it's no surprise that Wang found neural units scattered all over the auditory cortex that would respond to parts of the call, for the length of time the call stayed centered around the unit's home frequency. That was true for the synthetic, wrongly timed calls, too. That is, the animal heard them.

However, Wang found certain subpopulations of neural units that responded selectively to the calls of particular individuals. Such a neural network would allow the marmoset to recognize a particular animal's voice. Other subpopulations responded selectively to authentic calls, not to the synthetic ones, which indicates that timing is crucial: it is what makes a vocalization a twitter call rather than meaningless jabber. "That's the exciting thing," says Wang. "The system converts temporal parameters into spatial form."
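
That conversion can be pictured with a toy model. Everything below (the "units," the intervals, the tolerance) is invented for illustration and not drawn from the marmoset data: each unit fires only when a call's syllable timing matches its preferred pattern, so which units fire becomes a spatial code for when the sounds occurred.

```python
# Toy model of temporal-to-spatial coding. Each "unit" prefers one pattern of
# inter-syllable intervals (in milliseconds) and fires only when a call's
# timing is close to that pattern. All numbers here are invented.

def unit_fires(preferred_intervals, call_intervals, tolerance_ms=10):
    """A unit responds only if every interval is within tolerance of its preference."""
    if len(preferred_intervals) != len(call_intervals):
        return False
    return all(abs(p - c) <= tolerance_ms
               for p, c in zip(preferred_intervals, call_intervals))

units = {
    "unit_A": [120, 130, 125],  # tuned to a natural twitter-call rhythm (invented)
    "unit_B": [200, 210, 190],  # tuned to a slower rhythm (invented)
}

natural_call = [118, 133, 122]    # right frequencies, right timing
synthetic_call = [150, 160, 155]  # right frequencies, wrong timing

for label, call in [("natural", natural_call), ("synthetic", synthetic_call)]:
    responders = [name for name, pref in units.items() if unit_fires(pref, call)]
    print(f"{label} call -> responding units: {responders or 'none'}")
```

The natural call lights up unit_A; the synthetic one lights up nothing, much as the wrongly timed calls failed to drive the call-selective subpopulations.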

His findings could well explain how human beings learn to distinguish /pa/ from /ba/, the better to say papa, patty-cake, and parameter--or, since the difference between /pa/ and /ba/ boils down to a hundredth-of-a-second pause, how some never learn it. Spoken language is full of such subtleties, an obstacle for anyone who cannot hear the difference.

That idea lies behind a recent collaboration between neuroscientists at Rutgers and the University of California Medical School (where Wang was then a postdoc). Their mission: to create a therapy that would actually, physically build up neural networks to distinguish simple sounds.

The first seven patients were children with a condition called specific language impairment (SLI), one of the more severe dyslexias. All were at least two years behind in linguistic development. Paula Tallal, the Rutgers team leader, had pinpointed the trouble as an inability to hear "fast transition sounds," like the phonemes /ba/, /pa/, /ka/, /ta/, /da/, and /ga/.

Led by Michael Merzenich, the Californians tailored the therapy, which centered on "processed speech"--speech the way these children could understand it. Wang's role was developing software that would slow speech down, while also boosting the pesky transitions by about 25 decibels. The New York Times said the result sounded like someone shouting underwater, but to the children it was a great relief. Suddenly, many mysteries became clear.
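
The researchers' actual code isn't described here, but its two basic moves, stretching speech in time and boosting the brief transitions, can be roughly sketched. This is a naive, hypothetical version: the stretch factor, the transition detector, and the simple waveform stretch (which lowers pitch, where the real processing did not) are all assumptions.

```python
# Rough, hypothetical sketch of "processed speech": stretch the signal in time
# and boost the rapidly changing portions. Not the researchers' method; the
# stretch factor, 25 dB gain, and transition detector are illustrative.
import numpy as np

def process_speech(signal: np.ndarray, sample_rate: int,
                   stretch: float = 1.5, boost_db: float = 25.0) -> np.ndarray:
    # 1) Slow the speech down by simple interpolation (a crude stretch that,
    #    unlike the real method, also lowers the pitch).
    old_t = np.arange(len(signal))
    new_t = np.linspace(0, len(signal) - 1, int(len(signal) * stretch))
    slowed = np.interp(new_t, old_t, signal)

    # 2) Find fast transitions: 10 ms frames where the amplitude envelope
    #    changes quickly (a stand-in for detecting brief consonant cues).
    frame = int(0.01 * sample_rate)
    n_frames = len(slowed) // frame
    env = np.array([np.abs(slowed[i * frame:(i + 1) * frame]).mean()
                    for i in range(n_frames)])
    change = np.abs(np.diff(env, prepend=env[0]))
    fast = change > change.mean() + change.std()  # assumed threshold

    # 3) Boost those frames by roughly 25 decibels.
    gain = 10 ** (boost_db / 20.0)
    out = slowed.copy()
    for i in np.where(fast)[0]:
        out[i * frame:(i + 1) * frame] *= gain
    return out
```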

For four weeks, the children spent three and a half hours a day, five days a week, working one-on-one with the Rutgers researchers. Using tapes of processed speech, they were coached in listening, grammar, and following directions (a skill some of these children lacked, since they'd never understood any orders). At home, seven days a week, they played colorful video games using processed speech; these games were programmed to slowly speed up as each child progressed.

At the end of the month, when tested again on language comprehension, all the children had advanced two years. For the first time in their lives, the SLI children understood speech about as well as other kids their age. A second experiment, with control groups, had the same result.
--EH

The power of a cry

Mothers' lore is rich with anecdotes about nursing mothers who unexpectedly let down their milk when they hear a strange infant cry or even a cat mewing. Nurse Barbara Van de Castle, an instructor at Hopkins's School of Nursing, decided the alleged reaction might benefit nursing mothers who use breast pumps at work. She made a 15-minute audiotape of an infant crying and gurgling, and asked a group of nursing mothers to listen to the tape while using a breast pump at their workplaces.

Listening to the tape helped women let down their milk faster (on average in less than 30 seconds), complete pumping almost two minutes sooner, and produce slightly more milk, report Van de Castle and Susan Will, a clinical specialist at Baltimore's Sinai Hospital.

Breastfeeding involves the milk ejection reflex, explains Will. The baby's sucking stimulates the release of the hormones prolactin and oxytocin in the mother. Prolactin causes milk to be secreted within the breast, and oxytocin causes the cells surrounding the milk glands to contract so that milk is released.

To aid this reflex, breastfeeding specialists recommend that women relax. But many workplaces lack privacy and, thus, make it difficult, if not impossible, to relax, says Van de Castle. Discouraged that they cannot produce enough milk while at work, many women give up trying. Van de Castle says she is looking for "whatever it takes" to help women who want to pump their milk while on the job.

For their six-week study, Van de Castle and Will recruited 61 new mothers, and gave each a breast pump. They asked 29 of the women to listen to the tape of baby sounds each time they pumped their breasts at work, starting the tape recorder just before using the pump. The remaining 31 women, who served as a control group, pumped their breasts at work but were not given the tape.
--MH

Notes of control

If all music were improvised, and no one cared about preserving or prescribing particular compositions, we'd have no need for musical notation. But according to Susan F. Weiss, who teaches music history at Peabody, people have wanted to write down music for a long time.

"We have examples of notation that date back to 1400 B.C., to the Hurrian cult in Mesopotamia," she says. "They used symbols written on tablets." These symbols indicate two parallel melodic lines, and use of intervals--sixths and thirds. "So this suggests that the Hurrians not only had a melody, but some sort of harmonic structure as well," says Weiss.

She recently worked with Baltimore's Walters Art Gallery on an exhibit of musical notation, called "Singing Along with Guido and Friends," that opened last month and will run through November. The Guido of the title is Guido of Arezzo, one of the principal European music theorists of the Middle Ages. He invented a four-line system, close to the five-line system used today, that permitted precise notation of musical pitch. This refined notation greatly speeded the process of teaching choristers to sing Gregorian chant.

The ancient Greeks had an elaborate system of notation, says Weiss, but the medieval church suppressed it: "We believe that the early church fathers were so frightened by the pagan aspect of the Greeks that they pretty much buried their system." For a long time, priests or cantors made do with mnemonic markings in their texts, called neumes--slanted pen strokes and squiggles that served as reminders. In manuscripts from the 9th and 10th centuries, first one, then a second staff line appears, usually drawn in different colors.

The church's desire to control music spurred more elaborate systems. Says Weiss, "During the Carolingian period they wanted to rid music of all the local impurities. Charlemagne sent his bishop out to find the 'pure' chant. They wanted to codify music so there would be an official version." The only way to control what was being sung, she says, "was to have it written down."

With the advent of polyphony in the 12th century, "you now had to know the rhythm, because if you didn't, people wouldn't know when to come together." So meter signs were added by about the 13th century. "The 'C' that we think of as 4/4 time was actually half of a circle, which meant 'imperfect' time," she explains. "Imperfect time was 2/4."

Though the system familiar to modern musicians developed in the 17th and 18th centuries, Weiss says, "you could probably read stuff from the 16th century without too much difficulty. The notes were printed in diamond shapes."

Did notation lock Western musicians into a certain way of making music? Says Weiss, "Notation represented a blueprint. It wasn't meant to show precisely the way a piece should sound in performance. I think if Mozart were still around and looking at one of his scores, he'd probably say, 'Well, that's the general idea.' Though when you get to the 19th century, composers like Mahler really wanted to have everything spelled out. So you get over-determined scores that practically tell when to go to the bathroom, leaving nothing to the imagination of the performer."
--DK

Written by Elise Hancock, Melissa Hendricks, and Dale Keiger.

