By François Grosjean
Editor’s Note: This interview, conducted by François Grosjean, originally appeared on the Psychology Today blog, Life as a Bilingual.
Research on the bilingual brain has gone through several stages over the years: the study of aphasic polyglots, experimental work on language lateralization in bilinguals, and now brain imaging studies that examine language processing, neural structures, and the connections between them. One of the leading researchers in this field is Ping Li, professor of psychology and linguistics at Penn State. He works on the neural and computational bases of language representation and learning and has kindly agreed to answer a few of our questions. We wish to thank him wholeheartedly.
Before addressing the issue of what is different in the bilingual brain, as compared to the monolingual brain, can you quickly go through what is clearly similar?
It may be helpful to say at the outset that we are talking about the human brain, bilingual or not, which is the only brain that can learn and use complex natural languages for communication. No brain of any other species on our planet has language like ours, despite claims that other animals may also have sophisticated communication systems.
Against this backdrop, then, the similarities between the bilingual and monolingual brains will be more important than the differences. For example, given the physical constraints of the human brain — its neuroanatomical substrates — we must be using more or less the same neural structures to learn and use different languages, whether these are English, Chinese, French, or Spanish. In other words, we cannot imagine that each of the world’s 7000+ languages occupies a different part of the brain. Now, this is not to deny that different languages will engage the neural structures in different ways, a position I myself dearly embrace and which you described elsewhere (see here).
Does evolution also play a role?
From an evolutionary perspective, human language has had a long history — at least one hundred thousand years — and has evolved into a very complex communicative system. Evolution has determined that something as complex as human language simply cannot be supported by a single area in the brain. Rather, a great deal of brain resources needs to be dedicated to language. Recent neuroimaging evidence shows that language processing involves not only the classical Broca’s and Wernicke’s areas but the entire brain, from frontal to temporal to parietal lobes.
Concerning this, note that there is an area in the brain that we can call ‘the visual cortex’ (for visual processing), but there is no such area that we can call ‘the language cortex,’ to the disappointment of some who look for a ‘language gene’ or a ‘language area.’ Because of this, there is also no ‘monolingual cortex’ or ‘bilingual cortex.’ The most plausible scenario, as has been argued by David Green and his colleagues, is that the brain uses the same neural structures and resources to handle different languages, but in different ways, even in the same individual.
For many years, we were led to believe that language was lateralized differently in bilinguals than in monolinguals. Is there any truth to that?
Although the idea of different brain lateralization patterns for bilinguals versus monolinguals made sense initially, like many intuitively appealing ideas, it has become less and less likely the more we have learned about the linguistic brain.
I want to illustrate my point with one simple example. Take monolingual English speakers who learn Chinese and acquire lexical tone, an essential aspect of listening to and speaking the language. Now, we know that native Chinese speakers typically use the left hemisphere to process lexical tones (although the right hemisphere is also engaged to some degree), given that tones are phonological units marking different word meanings (for example, /pa/ means ‘squat’ when pronounced in Tone 1 and ‘crawl’ in Tone 2).
Native English speakers learning Chinese initially treat such tonal differences simply as acoustic variations (a high pitch in Tone 1 and a low-then-high pitch in Tone 2), and use the right hemisphere to process them. But once they are fluent bilinguals, they start to treat these tonal differences as phonological, and not just acoustic, variations. The difference between Tone 1 and Tone 2 is now just as important as that between /ba/ and /pa/. Hence, there is a stage where bilinguals shift from relying on the right hemisphere to relying on the left hemisphere, as lexical tones become linguistically meaningful to them.
To continue reading this interview, please click here to head over to Psychology Today.
Members of the news media interested in talking to Li should contact Tori Indivero at 814-865-6071 or vmi1@psu.edu.