Representation of language-related information across the human brain
The ease with which we understand speech belies the complexity of speech processing in the brain. Successful speech perception requires extracting relevant spectral and temporal components of the sounds, combining phonemes into words, and extracting meaning from the words. However, these various aspects of speech have usually been studied in isolation or in reduced contexts, and little is known about how different aspects of language-related information are mapped systematically across the human brain. To address this problem, we recorded BOLD fMRI responses from five subjects while they listened to over two hours of natural speech. We then used a voxel-wise modeling approach developed previously in our laboratory (see Huth et al. 2012, Neuron v. 76, p. 1210-1224) to simultaneously map the representation of four different aspects of speech onto the cortical surface: acoustic structure, articulations, syntax and semantics. The spectrogram model accurately predicts brain activity in early auditory areas. The articulatory model also predicts well in early auditory areas, as well as in Broca’s area and in motor areas. The syntactic model predicts well in higher auditory areas and in some portions of prefrontal cortex. The semantic model predicts well in many different areas of cortex in both hemispheres. To visualize the semantic maps we used dimensionality reduction and clustering methods developed previously in our laboratory. Visualization of the semantic space and cortical maps reveals highly complex distributions of semantic information in temporal, parietal, and prefrontal cortices that are similar across individuals. These maps form a complex pattern of semantic gradients and areas, many of which have not been described previously. In sum, our study reveals in unprecedented detail how language-related information is represented in the human brain, and suggests that speakers of English may share common representations of language meaning.
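The core of the voxel-wise modeling approach described above is regularized linear regression: each feature space (spectrogram, articulatory, syntactic, semantic) is regressed onto every voxel's BOLD time course, and model quality is scored by how well held-out responses are predicted. The sketch below is a minimal illustration of that idea using closed-form ridge regression on synthetic data; the dimensions, the noise level, and the fixed ridge parameter are assumptions for demonstration, not the actual analysis pipeline used in the study.

```python
import numpy as np

def fit_ridge(X, Y, alpha):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y.

    X: (time, features) stimulus feature matrix
    Y: (time, voxels) BOLD responses
    Returns W: (features, voxels) encoding weights, one column per voxel.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Synthetic stand-in for a feature space and voxel responses
# (sizes chosen arbitrarily for this sketch).
rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 400, 100, 20, 50
W_true = rng.standard_normal((n_feat, n_vox))
X_train = rng.standard_normal((n_train, n_feat))
X_test = rng.standard_normal((n_test, n_feat))
noise = 0.5  # hypothetical noise level
Y_train = X_train @ W_true + noise * rng.standard_normal((n_train, n_vox))
Y_test = X_test @ W_true + noise * rng.standard_normal((n_test, n_vox))

# Fit on the training run, then score each voxel by the correlation
# between predicted and held-out responses.
W = fit_ridge(X_train, Y_train, alpha=10.0)
pred = X_test @ W
r = np.array([np.corrcoef(pred[:, v], Y_test[:, v])[0, 1]
              for v in range(n_vox)])
print("mean held-out correlation:", round(float(r.mean()), 3))
```

In practice the per-voxel correlations `r` are what get projected onto the cortical surface: a model "predicts well" in a region when its voxels there have high held-out correlations.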
About the speaker
Jack Gallant is Professor of Psychology at the University of California at Berkeley, and is affiliated with the graduate programs in Bioengineering, Biophysics, Neuroscience and Vision Science. He received his Ph.D. from Yale University and did post-doctoral work at the California Institute of Technology and Washington University Medical School. His research program focuses on quantitative computational modeling of the human brain, particularly under naturalistic conditions. These models reveal how various aspects of sensory and cognitive information are mapped across the brain, and how these representations vary across individuals. A side benefit of this approach is that the resulting models can be used to decode information in the brain in order to reconstruct mental experiences. Although most of the current work in the laboratory involves functional MRI, this computational framework can be used to understand and decode brain activity measured by many different methods and in different modalities.