Conversations with Neil’s Brain
The Neural Nature of Thought & Language
Copyright © 1994 by William H. Calvin and George A. Ojemann.
     You may download this for personal reading but may not redistribute or archive without permission (exception: teachers should feel free to print out a chapter and photocopy it for students).
William H. Calvin, Ph.D., is a neurophysiologist on the faculty of the Department of Psychiatry and Behavioral Sciences, University of Washington.
      George A. Ojemann, M.D., is a neurosurgeon and neurophysiologist on the faculty of the Department of Neurological Surgery, University of Washington.
12
Acquiring and Reacquiring Language

Normal speech consists, in large part, of fragments, false starts, blends and other distortions of the underlying idealized forms. Nevertheless... what the child learns is the underlying [idealized form]. This is a remarkable fact. We must also bear in mind that the child constructs this [idealized form] without explicit instruction, that he acquires this knowledge at a time when he is not capable of complex intellectual achievements in many other domains, and that this achievement is relatively independent of intelligence....
      the linguist Noam Chomsky, 1969


STILL NO WORD of a cancellation on George’s surgical schedule, so Neil was biding his time, reading and taking advantage of the good weather while he could.
      One day he offered to pick me up in his boat, from the waterway in back of the medical school. I waited out at the end of the seaplane dock and swung aboard his sailboat as it drifted into the dock. So our conversations about language took place to the background music of sails luffing, punctuated by the occasional bong.
      “If you listen carefully, that aluminum mast sounds like a whole set of wind chimes,” Neil remarked.
      All I heard myself were muted gongs — all of the same monotonous note, not multiple ones. Undoubtedly, I told Neil, this was a matter of categorical perception — and he could hear more categories than I could. Remember, I asked him, telling your foreign language instructor — after she’d corrected your pronunciation — that you were pronouncing the word as she did?
      “Must have happened a dozen times.”
      The instructor could hear differences that you couldn’t. She had classification categories for sounds that you didn’t have. Newborn infants are also better at hearing subtle differences, compared with adults. Just like your language instructor, they can detect the slight differences between certain speech sounds that adults will insist are identical.
      Babies may not be able to tell you which pronunciation is correct, but they can tell you whether a sound has changed from its previous repetitions.
      “How does anyone know that? The babies can’t talk, after all.”
      The child psychologists have gotten very clever. Babies get bored when hearing the same sound over and over, but they’ll perk up if you introduce a little novelty. They’ll get quite good at detecting subtle shifts in the repeated syllable if you reward them with a brief glimpse of a dancing bear. When they hear the sound change, they’ll turn to look at where the bear will briefly appear — that’s how you tell they heard the sound alter. And so you have the speech synthesizer vary the sound timing a little, perhaps exploring the range between /pa/ and /ba/. Newborns seem to detect the sound changing — in other words, they perk up and watch expectantly for the bear’s appearance — when older children or adults are insisting that nothing changed.
      We adults hear the sound suddenly switch from /pa/ to /ba/ in the middle of the timing gradations — in other words, we create a dichotomy where none existed. Such categorical perception is often shaped by experience, and newborns are without much experience.
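      To see that dichotomy in miniature, here is a toy sketch in code, with made-up numbers: the adult listener is modeled as a hard category boundary on voice-onset time, the newborn as a detector of any acoustic step larger than a small just-noticeable difference. The 25-millisecond boundary and the 5-millisecond detection step are illustrative assumptions, not measured values.

    # Toy sketch of categorical perception (illustrative numbers, not data).
    # An adult collapses a voice-onset-time (VOT) continuum into two categories;
    # a newborn, in this caricature, notices any step bigger than a small
    # just-noticeable difference, whether or not it crosses the category boundary.

    def adult_percept(vot_ms, boundary_ms=25.0):
        """Adult listener: everything below the assumed boundary sounds like /ba/."""
        return "/ba/" if vot_ms < boundary_ms else "/pa/"

    def infant_notices_change(old_vot_ms, new_vot_ms, jnd_ms=5.0):
        """Newborn listener: perks up at any sufficiently large acoustic step."""
        return abs(new_vot_ms - old_vot_ms) >= jnd_ms

    for old, new in [(10, 20), (20, 30), (40, 50)]:   # three equal 10-ms steps
        adult = "a change" if adult_percept(old) != adult_percept(new) else "no change"
        infant = "a change" if infant_notices_change(old, new) else "no change"
        print(f"{old} -> {new} ms VOT: adult hears {adult}, newborn detects {infant}")

      Only the middle step, the one that happens to straddle the assumed boundary, is heard as a change by the adult; the newborn, with no boundary yet in place, reacts to all three equally sized steps.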
      “Don’t they hear sounds in the womb?”
      Yes, but only low frequencies — the higher frequencies are filtered out, just as they are from the sounds of your neighbor’s hi-fi. The walls only let through the low stuff, the boom-boom. And the fetus hears lots of pulsations from the mother’s heart, or gurgling from her gut, that may interfere with concentrating on external sounds.
      Our sound categories are formed from experience listening to parents and siblings and television audio. The baby literally tunes itself up to the peculiarities of the language that it hears. In particular, it learns to deal with the variations between speakers in how they pronounce those sounds by creating broad categories. In the process, the baby loses the ability to detect subtle differences in speech sounds (“phonemes”) that it could earlier perceive. It forms mental models of the phonemes and ignores any slight variations.
      “Is that why the Japanese have so much trouble in pronouncing R?”
      Having sound categories can create some problems when hearing a language that you weren’t raised with. In Japan, for example, babies learn a phoneme that is midway between /r/ and /l/. Forming such a category means that you learn to ignore variations around this phoneme. So, when exposed to an English /r/ or /l/, the Japanese tend to hear that in-between Japanese phoneme.
      “So they think that the two English phonemes are the same thing.”
      They are, after all, both captured by the same mental category. Most of us can’t hear the difference between similar Hindi or Portuguese phonemes either, thanks to our upbringing. If we can’t hear the difference, we can’t correct our own pronunciation. Eventually we become the somewhat defensive student who complains to the language instructor, unable to hear a sound difference that we could have detected as a newborn baby.
      After this tuneup period of infancy for the locally used speech sounds, various aspects of language develop.
      “Such as babbling, which is what my youngest is now doing.”
      But soon she’ll be building a basic vocabulary, and going on to the two- and three-word sentences in her second year. Then she’ll acquire syntax and fancier sentences in her third year. And develop a fascination with stories and other sequences, then learn to read.
      “So the language cortex is self-organizing around the natural categories of what it hears? Does it reorganize the same way, after damage?”
      Of course, sometimes the cortex can’t effectively reorganize, such as after those critical periods for using both eyes together. But clearly some cortical areas are very good at reorganizing, even in adults. There aren’t many such studies of hearing or speech, but there are some wonderful reorganization stories from the sensory strip.

[FIGURE 58 Sensory strip maps before and after finger exercise]

THE HAND’S MAP IN THE SENSORY STRIP is not fixed but subject to considerable rearrangement. The boundaries between finger representations in the cortex move by millimeters on a timescale of days to weeks — and this is in an ordinary adult monkey who is merely getting a little exercise of one part of one finger for several weeks, touching a bumpy surface.
      “Something like a blind person `reading’ braille?”
      Exactly. Someone has even mapped the sensory strip in blind people and shown that their finger areas are larger than average. In monkeys, you can see how it happens.
      Even without such obvious exercise, the thumb-face boundary in monkeys moves by about a millimeter over a period of several weeks. Some neurons that were responding to a patch of skin on the face will stop responding to it — and begin responding to a patch of skin on the thumb. This back-and-forth, for no apparent reason, suggests a continuous dynamic retuning during the monkey’s life.
      “It sounds like a boundary dispute. Just like the province of Alsace on the French-German border, which has changed nationality four times since 1871.”
      And so has a composite character, just like those neurons that represent the thumb one week and the face the next.
      “Use it or lose it” may overstate the issue, but these studies of the sensory strip in adults certainly suggest that competition is alive and well in the cerebral cortex. Before that discovery, the adult primate brain was considered rather inflexible, with only young brains capable of such substantial rearrangements of function.
      “But how radical can this reassignment be?”
      It seems quite variable: more in juveniles than adults, more for skin sensation than for vision. And there are preferences: if an arm is lost, its space in the sensory strip seems to be taken over entirely by the lower part of the face, mostly the chin and jaw. The chest representation, on the other side of the sensory strip from the vacated area, doesn’t invade at all.
     

FOR LANGUAGE, there are no detailed studies of the kind possible in experimental animals, but there are many clues from how infants compensate for severe damage to their brains. Things can go wrong with the usual developmental sequence, and some of them teach us about the child’s changing brain organization for language.
      Certainly the most dramatic clue comes from a rare congenital malformation, called the Sturge-Weber syndrome, where abnormal blood vessels occur over one side of the brain during fetal development. In such arteriovenous malformations, much of the oxygenated arterial blood is shunted directly into the veins without ever traversing the capillaries. Thus the neurons are not properly nourished. The brain under these vessels develops severe seizures, is stunted in its growth, and becomes essentially useless.
      That’s bad enough. But the seizures spread to affect the other side of the brain as well, preventing it from getting on with development. The malformed blood vessels of one side of the brain essentially put both sides of the brain out of commission. For decades, neurosurgeons have treated babies born with this problem: they remove the abnormal blood vessels and the cerebral cortex they supply, leaving the subcortical structures that get their blood supply from elsewhere.
      “And these poor kids get along on only half a brain? I sure didn’t, back during the Wada test.”
      Well, actually the baby has more than half a brain, as the cerebral cortex isn’t everything. These kids only have half as much cerebral cortex as is normal. Surprisingly, they grow up without being paralyzed on one side of the body, or blind in half of their visual world, as you might predict. Evidently the remaining half can run the same side of the body as well as the opposite side. Sometimes the malformation is on the right side of the brain, leaving the baby with only a left cerebral cortex.
      “So they can talk okay?”
      Yes. But sometimes it’s the left cerebral cortex that has to be removed, and that leaves the baby without the structures thought to underlie language. Now, if development of language were totally dependent on wiring in the left brain, language shouldn’t develop in such right-brain-only kids.
      Yet it does. When examined years after such an operation, children with left-hemisphere removals have useful language, although they are often considered to be rather quiet children. The right brain evidently can support language, even though it doesn’t normally.
      However, this language is not quite normal. Testing these children at about the age of ten, the psychologists Bruno Kohn and Maureen Dennis found that the children with left-brain removals had many more problems using complex grammatical constructions than children with right-brain removals, even though both groups seemed to be of similar intelligence.
      “Differ? How?”
      Children with left-brain removals tend to talk in the present tense, finding the future tense somewhat difficult. There appears to be some wired-in mechanism in the left brain that allows for the full expression of language, a mechanism the right brain cannot completely support. The linguist Noam Chomsky suggested that because all real human languages use a finite set of construction rules, out of the infinite set imaginable, there must be a biological basis for syntax and grammar. The remaining language in the Sturge-Weber children suggests that only the left brain has the full set of neural structures needed for language.
[FIGURE 59 Critical periods and the “winds of experience”]

      Although in infancy the right brain can mostly compensate for loss of left-brain language mechanisms, adults are not so fortunate. A large left-brain stroke usually results in a permanent loss of most language functions, just as in Broca’s patient, “Tan-tan.” The right-brain compensatory ability seems to be lost for most of us sometime in the preschool years.
      Most children suffering major left-brain injury before the age of two seem to develop useful language. Children suffering left-brain injury between the ages of four and six are left with severe verbal learning deficits, even though they retain most of their previous language abilities. Permanent language loss begins to appear when such injuries occur at age six or seven.
      “Can George study kids in the operating room?”
      Not usually. While they sometimes have epilepsy that cannot otherwise be treated, the neurosurgical approach often used in adults, recording and mapping in the operating room, requires an awake, cooperative patient under only local anesthesia for part of the operation. That’s a bit much to ask of children or of adults with mental retardation. For them, a different technique is used, although it is considerably more expensive and somewhat riskier than the adult procedure. Under general anesthesia, a sheet of electrodes — called a “grid” — is implanted so that it rests atop the cortical surface, with the wire brought out through the skin incision. Mapping and recording in these children is then done during the following week.
      “That’s what George said I might require, if the round-the-clock EEG didn’t resolve the frontal- versus temporal-lobe issue.”
      Grids are quite wonderful, as the child’s language organization can be discovered during brief testing sessions in the days that follow implantation, when the child is awake and cooperative. The youngest child studied with such grid-based stimulation mapping during naming was four years old. The naming sites were nearly dime-sized, much as they are in adults. Multiple naming sites in the same lobe have not been identified before the age of eight.
      While the data from children are limited, they certainly raise some interesting questions. Might the development of localized naming sites correspond to language becoming “set in concrete,” an indication of the loss of an ability to shift it to the other hemisphere?
     
[FIGURE 60 Child’s rate of language acquisition ]

HOW CHILDREN LEARN LANGUAGE is a favorite subject of many a parent and teacher. Many “stages” (albeit, overlapping) of language development have been postulated by both linguists and developmental psychologists.
      “My daughter’s still babbling. But we’re trying to teach her Mama and Dada.”
      Parents and teachers assume that children would never learn language without their constant assistance. But it is fair to say that language is not taught so much as it is acquired willy-nilly, that children would probably learn language even without prompting and correcting.
      “That’ll surprise a lot of parents.”
      While children would surely learn language more slowly without help, most children would discover the word meanings for themselves in the preschool years. What is so important is hearing the sentences, seeing what people do in response to them, and then learning to influence the people around you by producing word strings yourself.
      Unlike the apes who have acquired useful vocabularies of a few hundred words, the preschool child is enormously acquisitive of words, adding a half dozen new words to its growing vocabulary every day. Some are simply acquired through observation, not the pointing-and-naming route; parents are sometimes surprised at a few words that their child picked up by listening in the months before beginning to talk.
      And acquisition by trial-and-error observation is especially the case for syntax — what’s called grammar in popular usage: the rules that we use to interpret strings of words. Did you teach your older children syntax?
      “Are you kidding? I barely know the rules. I couldn’t possibly have taught them. I only know when something is wrong, and try to point out the correct usage.”
      You do know the rules — you demonstrate that with every sentence you speak. You just can’t articulate them. And even linguists have trouble explaining them. So how did all of us ever learn English syntax?
      Children learn sentence construction rules by mere observation. Between about 18 and 36 months of age, children seem especially acquisitive of the structural rules underlying the sentences that they hear spoken around them. They may not be able to describe the parts of speech, or diagram a sentence, any better than their parents — but they act as if such knowledge was becoming embedded in their brains.
      This biological tendency is so strong that children can even invent a new language. Deaf playmates have been known to invent their own sign language (“home sign”). The linguist Derek Bickerton has shown that children can invent new languages out of the pidgin protolanguages that they hear their parents speaking. Pidgins are shared vocabulary used by traders, tourists, and “guest workers” (and, in the old days, slaves) — the words are usually accompanied by much gesture, as when I attempt some tourist Greek. It takes a long time to say a little, because of all those circumlocutions. A proper language allows you to pack a lot of meaning into a short sentence.
      Creoles are proper languages with their own syntax, capable of quickly conveying models of “Who did what to whom with which means” from one mind to another. The children of pidgin speakers seem to take the shared vocabulary they hear and create a syntax for it, although not necessarily the syntax from their parents’ native language. They invent, willy-nilly, a proper language. That’s the best evidence that the child’s brain is really predisposed to syntax.
      Compared to pidgins, a proper language can convey such complicated concepts using relatively few words. That’s because it has elaborate rules for interrelating the words to achieve additional levels of meaning.
      “Syntax, I gather, is what separates a protolanguage from real language. It’s what allows you to understand more than just the meaning of the individual words, standing alone?”
      Not quite. The real test of language is constructing sentences using the rules, not understanding them. It’s much easier to understand a fancy construction of someone else’s, simply because you can guess so well. When I was a visiting professor in Jerusalem, knowing hardly any Hebrew — maybe a hundred words of vocabulary that I could use with shopkeepers — I was at a faculty party when some long and heated exchanges took place, all in Hebrew. Noticing that I was concentrating on the discussion, a woman across the room abruptly asked me — fortunately in English — if I understood what they were saying. This brought the discussion to a complete stop and everyone turned around to look at me. I replied that I understood some of it. Well, she then asked me — in front of all those people — to describe what they’d been talking about.
      “That sure put you on the spot. So what happened?”
      I briefly said that they’d been discussing the peace treaty with Egypt and the loss of the settlements near the Gaza Strip, the loss of the Sinai Desert air bases, the political problems of relocating people. And it turned out I was right! Despite being unable to speak anything fancier than three-word sentences using my hundred-word Hebrew vocabulary, I’d guessed my way through complicated sentences to a general understanding of the topic.
      Speaking a novel sentence yourself, using the rules, is what’s so hard. Yet children find it easy to pick up new sets of rules — they can speak second languages easily, once they’ve learned one. Unlike their parents.
      Of course, some children never learn a first language’s syntax.
      “How’s that? You just said they pick it up, even without anyone teaching them.”
      If they’re deaf, they don’t learn the words or the rules by listening.
     

CRITICAL PERIODS IN LANGUAGE DEVELOPMENT are a serious matter for the one child in a thousand who is deaf (or nearly so) from birth — at least for those not exposed to a conventional sign language. Most hearing children are speaking single words by 12 months, form simple two-word sentences between 18 and 24 months, and start getting the word endings for past-present-future and for singular-plural between 30 and 42 months.
      Deaf children of hearing parents are doing little of this. In the United States, they aren’t even identified as hearing impaired until they are nearly three years old, on average. Meaning, of course, that many are identified even later.
      “How’s that possible? Surely the parents noticed something wrong before that.”
      Indeed, they were probably worried about why the baby was a slow learner. But never thought to stand behind the baby, out of sight, and make a loud noise that ought to startle him — if he had normal hearing. Lots of kids never get taken in for well-baby checkups. These days, there are cheap and effective tests you can perform shortly after birth that will detect deafness, before the infant ever goes home from the hospital. They aren’t used very widely, but they ought to be — because undetected deafness spells big trouble, thanks to critical periods for picking up the rules that govern sentence construction.
      I should start by saying that deaf children born to most deaf parents actually learn language as effortlessly as do most other children, and can readily learn nonsign languages when reaching school age. These deaf kids may have a social handicap in dealing with those who do not use their language, but they don’t have a language handicap as such.
      And the reason that infant deafness is so serious is also different from the issues of adult-onset deafness, where you’re mostly worrying about hearing aids or the paranoia that can develop from social isolation.
      If the parents are not fluent in sign language, the deaf child may be unable to acquire a syntax by gradual observation. And this interferes with language skills for life: you can’t “make up” for it later. A child not exposed to fluent sign language in the early preschool years may end up with a rudimentary vocabulary and little ability to construct or understand complicated sentences. Or to plan for tomorrow. Or to think before acting and predict why that course of action might distress other people.
      It is now clear that a child needs more than words (or signs), more than two- or three-word sentences: a child needs to discover a syntax within its everyday environment. And it needs this experience in its third year of life, not later in school. Learning the syntax of at least one language during its first 18 to 36 months seems to be important for learning another language later, such as lip-reading English. American Sign Language (ASL) has a syntax to discover, and Manually Coded English uses the syntax of English.
      “But half of those kids, you said, were discovered to be deaf after 36 months of age.”
      And so they’re really behind, in big need of a crash course. But the parents can’t usually provide it themselves, even if the deafness is discovered much earlier. They assume they can “stay ahead” of the child in learning sign-language vocabulary — and don’t realize that the real problem is syntax. Unless raised in a deaf community themselves, the parents seldom get good enough at sign language to use signing syntax, and so the deaf child can’t discover much syntax from watching them.
      “So those deaf kids really need to be in a deaf preschool, with fluent signing all about.”
      Exactly. But a few hours a day in such a preschool may be marginal, compared to what is experienced by the babies of deaf parents who are surrounded from birth by fluent ASL all day. They, however, are only 10 percent of all deaf children.
      “So what’s a parent to do?”
      I asked one of the experts what she’d do herself if she had a deaf child. And she said that, besides having the whole family learn ASL as quickly as possible, she’d supplement the deaf preschool by hiring a deaf baby sitter for the rest of the day. That certainly sounds like a better strategy for the hearing parents of a deaf child, and it’s perhaps an essential one when the child’s deafness is discovered after much of the normal language-acquisition period has already passed.
      Time’s just too short after the discovery of the deafness, and so most parents don’t get their act together before the window closes. In many places, community resources don’t kick in until school age. So another child loses out on its proper human heritage, language, all because the community didn’t try to head off trouble with free well-baby clinics and outreach programs.
      “At least they’re better off than those `wolf children,’ the kids supposedly reared by wild animals.”
      They’re a particularly dramatic example of an abnormal upbringing, but they usually have multiple medical and social problems that interfere with an analysis of their language learning deficits. The deaf children of hearing parents are almost normal in comparison, having everything except language experience.
      There are some intermediate cases around, such as Genie. She’s the girl brought up by a mentally ill father who locked her in her room. Until she was discovered at age 13, the only human voices she encountered after the age of 18 months would have been those heard through the walls of her room.
      Although said to be of normal intelligence, Genie has never developed normal language despite intensive therapy provided after her discovery and release from her secret prison. After passing through the pre-two-year-old stages of language acquisition, she remains stuck at a level of language exemplified by such utterances as “Applesauce buy store.”
      Most sensory systems do not develop properly unless there is exposure to appropriate sensations during a particular phase of development, what we call the “critical period.” Genie, those deaf children reared without syntax to discover, and the brain-damaged children — all suggest that there is a critical period for language in the preschool years. Without appropriate experience, then, language — in the beyond-the-apes sense — becomes impossible. Such language-deprived children will likely, like Genie, remain stuck at the level of protolanguage. For them, the “window of opportunity” may be past, and we can only try to prevent other children from suffering the same fate.
      This is not to say that intensive instruction may not be helpful to the many children who were not totally isolated from language in the preschool years. A child’s hearing loss may develop gradually, so the child has some syntax experience before becoming deaf. Developmental abnormalities that limit social interactions — such as those of autistic children who talk very little — may give the appearance of lacking language, but such children may have gotten enough listening experience to acquire language during the critical period and thereby benefit from therapy later.
      “What about critical periods for the chimps that have been learning sign language? Is that why some chimps have failed to learn syntax?”
      It’s not really sign language, in the ASL sense, although some early studies were indeed done with ASL. But manual signing is so hard to teach apes that not enough words can be learned in the time available. What the experimenters now do is to use symbol boards with hundreds of arbitrary symbols, just like the ones used with retarded and autistic kids. The teachers point to the symbols as they talk aloud, and the apes — unable to talk very well — learn what symbols correspond to what objects, what foods, what actions, which people. And so they themselves point at a series of symbols to construct their own sentences.
      It’s much more natural, very much the way normal children acquire words: observing, learning what the symbol is good for, and then producing. The trouble with teaching apes manual sign languages like ASL is that it’s a lot of work learning to produce a sign, and that comes before they ever learn what it’s good for — which is not the way to motivate anyone, including apes. The symbol board tries to mimic the customary route by which babies work their way into the world of language: comprehension first, production later.
      So far, it looks like there is a critical period for this kind of protolanguage even in the bonobos — even for learning words. The two bonobos exposed to the symbol-board language after the age of three — the mother of the two particularly successful bonobos, and their half-sibling — have not been able to acquire either words or an understanding of syntax, despite lots of effort by the teachers. But two other offspring of the same mother — Kanzi and Panbanisha, who both started on language before the age of three — have learned lots of words.
      “I saw Kanzi on TV, doing about as well at carrying out complicated instructions as a little two-year-old girl did. So you’ve got to get to them while they’re still young?”
      Right. Whether ape or human. There may be some facet of language that is innate, but “language is innate” tends to gloss over the fact that the capacity needs to be developed during a sensitive period of early childhood.
     

REACQUIRING LANGUAGE AFTER LOSING IT is, of course, the big problem that stroke patients may have. Although most adults with large areas of damage in the left brain will have permanent language deficits, those with smaller areas of injury may recover. They may initially have equally severe deficits but then go on to recover some or all language functions over a period of months to years.
      “So speech therapy works for stroke victims?”
      Although there is some evidence that this recovery is hastened by speech therapy, it may also occur without any therapy. How this recovery occurs is an important question, for it can give us some idea of the extent of “plasticity” of the adult brain. And maybe help us design a really effective speech therapy for aphasics.
      “So does the right brain start helping out, when the language areas of the left brain are damaged by a stroke?”
      Occasionally. In a few patients, the right brain seems to have a basic vocabulary, as shown by an ability to point at objects whose names they have heard. These abilities have been most clearly shown for some epileptics whose corpus callosum has been severed. This bundle of connections is the major path by which the cerebral cortex of the left brain communicates with the cerebral cortex of the right brain, and so cutting it can prevent the spread of left-sided seizures into the right brain as well. In some (but by no means all) of these “split-brain” patients the isolated right nondominant hemisphere seems to have some understanding of the meaning of simple nouns.
      The trouble is, this basic vocabulary of their right brains may represent early rearrangement rather than intrinsic right-brain language ability — just as some left-brain language functions emigrated to the right brain in the Sturge-Weber children with left-hemisphere seizures. Most of the famous split-brain patients also had seizures from early childhood, which may have shifted some language rightward long before their connections were cut. In any event, human right-brain abilities seem minor on the scale of chimpanzee and bonobo linguistic achievements — unless language has been totally forced out of the left brain early in life.
      “Well, what about rearrangements within the left brain itself, after a stroke? Does that work like those sensory strip monkeys?”
      Although relatively few cases have been examined so far, it’s not looking like those rearranged finger maps — at least with the techniques suitable for use in the O.R. When patients who have recovered from strokes have electrical stimulation mapping for some reason — usually during surgery for the seizures caused by the stroke — their naming sites are found on the margins of the stroke damage. Yet those sites are within the territory where naming sites can be found in “normals.” Were there substantial reassignment, you would expect naming sites in unusual places. That hasn’t been seen.
      But perhaps there would be a better chance of seeing rearrangements while they are in the process of happening, rather than later, when stabilized. In a few patients with brain tumors near language areas, language has been mapped several times, at different stages in tumor enlargement. As with the epilepsy patients without tumors, the tumor patients have multiple well-localized naming sites when first mapped, at a time when language seems normal. As the tumor progresses and language begins to fail, a patient may undergo a second surgery to remove the resurgent tumor; at that operation, remapping shows that one of those naming sites has been altered.
      What happens is that the well-defined boundaries of the naming site seem to be replaced by a more diffuse area where only an occasional error in naming occurs during stimulation. The other naming sites seem to be unchanged. Later, when the patient experiences greater language difficulties, remapping shows loss of additional naming sites. Again, naming sites in unusual locations have not yet been observed in such patients.
      These findings are not what one would expect if a lot of rearrangement were possible in adult language cortex. The substantial rearrangements in the sensory strip’s map for the hand and the face in monkeys had raised the hope that the potential for adult rearrangement might be widespread, that the visual cortex was just a “hard-wired” exception. Neither set of observations from naming sites suggests that new ones can develop in unusual cortical areas, although such centimeter-scale measurements with stimulation mapping hardly rule out the more subtle, millimeter-scale reassignments seen in the adult monkey sensory strip with sophisticated eavesdropping on individual neurons.
      “But still,” Neil observed, “there are several naming sites. That’s redundancy, isn’t it?”
      Not really. A substantial number of naming sites must be destroyed before language fails completely. But gradual loss of even one site, as by a slowly enlarging tumor, seems to be associated with minor language problems. And sudden loss of one site, as in strokes, is often followed by a severe language problem, lasting for months at least. So that isn’t redundancy in the usual sense of the word — like those two backup systems for lowering the landing gear of an airplane, in case the primary hydraulics fail.
      “But I’ve heard that you lose neurons every day of your life. So there must be redundancy, or we’d all become demented.”
      Yes, there’s a slow “normal” loss of neurons in many parts of the brain with age. And when that happens, performance does gradually degrade. It takes longer to do things, and more errors are made.
      But generalizations about the number of neurons lost every day tend to obscure the really interesting differences between brain regions in the loss rate. The substantia nigra, in the depths of the brain, has lost half its neurons by age 75 in normal humans, a time when some nearby regions of brain stem still have 98 percent of their neurons. One part of the hippocampus (one of the oldest cortical structures) tends to lose about one-fourth of its neurons by age 75. But nothing so dramatic happens to the neocortex, unless aided by disease.
      Even though few (if any) new neurons are formed during life, we can retune many circuits to operate with fewer and fewer neurons. As I noted earlier, it is often said that about 80 percent of any given system can be destroyed before symptoms are noticed, so long as it is done very slowly (as in tumor growth) rather than rapidly (as by a stroke interrupting the blood supply).
      “So you can get away with a lot of neuron death in the brain, so long as it happens slowly?”
      That’s the idea. The issue becomes one of the minimum number of neurons needed before compensation fails. For example, about 70 to 80 percent of substantia nigra neurons are missing in patients with Parkinsonism, a level that ordinarily would not be reached until after age 100. It is presumed that some viral disease earlier in life destroyed some neurons there, but that symptoms don’t appear until the age-related decline brings the total to the 70-80 percent level.
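      As a back-of-the-envelope sketch of that arithmetic (the steady loss rate, the 75 percent symptom threshold, and the insult sizes below are illustrative assumptions, not clinical measurements):

    # Rough arithmetic for "slow decline plus an earlier insult."
    # Assumptions: neurons are lost at a steady rate that reaches 50% by age 75,
    # and symptoms appear once total loss nears 75% of the original population.

    NORMAL_LOSS_PER_YEAR = 0.50 / 75     # roughly half the substantia nigra gone by 75
    SYMPTOM_THRESHOLD = 0.75             # symptoms near 70-80% total loss

    def age_at_symptom_onset(earlier_insult):
        """Age at which steady decline plus an earlier extra loss crosses the threshold."""
        return (SYMPTOM_THRESHOLD - earlier_insult) / NORMAL_LOSS_PER_YEAR

    for insult in (0.0, 0.20, 0.40):
        print(f"extra {insult:.0%} lost early: symptoms near age "
              f"{age_at_symptom_onset(insult):.0f}")

      With no earlier insult, the threshold isn’t crossed until after age 100; an extra early loss of 20 or 40 percent pulls the symptom age down into the eighties or fifties.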
      More sudden inactivation of neural circuitry, where the system doesn’t have time to readjust and reassign functions, may result in obvious problems if as little as, say, 30 percent of a system’s neurons are not working well. So this really isn’t redundancy in the usual sense of the word, but some form of distributed functionality.
      “So what happens when you’re getting close to the threshold for trouble? Can you get any warning?”
      You start getting some intermittent problems, here today and gone tomorrow. And difficult to identify. But they’re more likely to occur at some times of the day than others. When the number of functioning neurons in a brain system gets close to the threshold, then fluctuations in function may become noticeable as the patient tires during a long day. Or in response to unrelated illnesses such as head colds. Arms may get weak, a foot may drag, blurred vision may develop, reflexes may become abnormal, remembering a name may become more difficult than usual — depending, of course, on what system is marginal.
      This situation is often encountered during recovery from head injuries or strokes, when patients may have neurological deficits when first awakening in the morning while they are still somewhat groggy. The neurological symptoms then disappear by midmorning. But, as the patient becomes fatigued in the evening, the symptoms reappear. As the system recovers from injury or successfully reassigns function to undamaged brain regions, then the fluctuations in function seldom reach the threshold — and so the symptoms disappear.
      “That reminds me of some auto mechanics I know who try to explain to me that the hardest problems to diagnose are the intermittent ones. Sometimes you can reveal them by making the engine labor, going up a steep hill. And sometimes you just have to wait until something thoroughly fails, in order to find the problem.”
      Brains are often like that, too. That’s why intermittent neurological problems often have to be referred to various specialists, who are a bit better at figuring out the mental equivalents of those steep hills — so as to temporarily fatigue the patient, and more clearly reveal the nature of the deficit. And thus figure out the right diagnosis, on the route to a prognosis and treatment.
      “But auto mechanics have a big advantage over physicians. They can just try replacing one thing after another until the car stops intermittently failing.”
      Or until the customer’s pocketbook runs dry, whichever comes first.
     
