Speech perception, production, and disorders

Speech is a unique evolutionary achievement that has played a critical role in human development. We study how this system works and identify its underlying neural mechanisms. In particular, we investigate: individual variation in the hemispheric lateralization of speech perception; the role of attention in speech perception, assessed through speech-brain synchronization under noisy conditions; the relation between speech perception and production; interactions between listeners and speakers; whether early speech perception skills predict later language abilities; and the development of tools for diagnosing language disorders.

Although the left hemisphere is generally dominant for language, there is considerable variation across individuals. We investigate this variation across domains (print, spoken language) during speech perception, speech production, and the resting state, using MRI and MEG.
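One conventional way to quantify hemispheric dominance from such data is a laterality index computed over homologous left- and right-hemisphere regions of interest. The sketch below is purely illustrative and not our actual analysis pipeline; the inputs (for example, counts of supra-threshold voxels or summed source power per hemisphere) are assumptions.

```python
def laterality_index(left_activation: float, right_activation: float) -> float:
    """LI = (L - R) / (L + R): +1 = fully left-lateralized, -1 = fully right-lateralized."""
    return (left_activation - right_activation) / (left_activation + right_activation)

# Example: 1200 supra-threshold voxels in a left ROI vs. 400 in its right homologue
print(laterality_index(1200, 400))  # 0.5 -> moderately left-lateralized
```

Values near zero would indicate bilateral processing, which is precisely the kind of individual variation we aim to characterize.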

Recent evidence shows that ongoing oscillatory brain activity phase-aligns to the pseudo-rhythmic temporal patterns of speech. This "brain-speech synchronization" reflects tracking of the slow temporal modulations of the speech envelope. We test whether stronger cortical speech tracking predicts better speech understanding. We study the role of attention in speech perception by evaluating speech-brain synchronization in ecologically valid, noisy conditions. In particular, we examine how bilinguals deal with multiple speech inputs when the streams they have to attend to or ignore are in the same or different languages. We also investigate how the brain responds to intelligible and unintelligible stimuli.
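To make the notion of cortical speech tracking concrete, the sketch below shows one simple way such tracking could be indexed: extract the slow amplitude envelope of the speech waveform and compute its coherence with a neural signal in the 1-8 Hz range. This is a minimal illustration under assumed inputs (a single EEG/MEG channel and a speech waveform resampled to a common rate); it is not a description of our analysis pipeline.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, coherence

def speech_envelope(speech, fs, cutoff=8.0):
    """Broadband amplitude envelope, low-pass filtered to retain the slow
    (delta/theta-range) modulations that cortical activity tends to track."""
    env = np.abs(hilbert(speech))                    # analytic amplitude
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)                       # keep modulations below ~8 Hz

def envelope_tracking(eeg, speech, fs):
    """Mean magnitude-squared coherence between a neural signal and the
    speech envelope in the 1-8 Hz band: one simple tracking index."""
    env = speech_envelope(speech, fs)
    f, coh = coherence(eeg, env, fs=fs, nperseg=int(4 * fs))
    band = (f >= 1) & (f <= 8)
    return coh[band].mean()

# Synthetic example: 2 minutes of data at 250 Hz, with a shared 4 Hz rhythm
if __name__ == "__main__":
    fs = 250
    t = np.arange(0, 120, 1 / fs)
    speech = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 4 * t))
    eeg = np.sin(2 * np.pi * 4 * t) + np.random.randn(t.size)
    print(f"1-8 Hz envelope coherence: {envelope_tracking(eeg, speech, fs):.3f}")
```

An index of this kind can then be related to behavioral measures of speech understanding, or compared across attended and ignored speech streams.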

The usual assumption is that speech perception and speech production share representations, so that learning something about one supports learning about the other. Our research examines how producing speech during learning affects the learning of speech. Using discrimination tests and eye-tracking measures, we investigate whether production during learning impairs perceptual learning. Given that practice in second-language classes typically involves students repeating what the teacher says, our results are highly relevant for classroom practice. Classroom instruction also typically includes a substantial reading component, so we investigate whether the way we perceive and produce speech sounds is strongly influenced by reading skills and orthographic rules.
Using behavioural and neural measures, we investigate the impact of speaker identity (e.g., foreign speaker, synthesized voice, avatar) on the listener's attentional load and information retention. This research will reveal how the properties of a speaker's voice influence memory, attention, and speech perception, processes that are usually studied independently.
Language disorders are typically diagnosed at preschool or school age, when children's expressive language abilities can be assessed. However, speech perception skills can already be measured in the first months of a child's life, providing early indicators of later language abilities such as vocabulary size and literacy. We examine infants' early ability to discriminate speech sound contrasts that signal changes in meaning in their native language and to disregard contrasts that do not. We combine behavioral and neurophysiological (EEG) techniques to detect individual differences in monolingual and bilingual infants' discrimination of native and non-native sounds. We relate these potential early biomarkers to language ability in the second year of life.
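As an illustration of how a neural discrimination response can be quantified, the sketch below assumes an oddball design with frequent "standard" and rare "deviant" syllables and computes the mean deviant-minus-standard difference wave in a post-stimulus window. The array names and the 150-250 ms window are assumptions for illustration, not a description of our protocol.

```python
import numpy as np

def mismatch_response(standard_trials, deviant_trials, times, window=(0.15, 0.25)):
    """Mean deviant-minus-standard amplitude in a post-stimulus window.

    standard_trials, deviant_trials: arrays of shape (n_trials, n_times)
    times: 1-D array of time points in seconds, aligned to stimulus onset.
    A reliably non-zero value suggests the brain discriminated the two sounds.
    """
    diff_wave = deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return diff_wave[mask].mean()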
Finally, we develop tools for the diagnosis and remediation of language disorders, with tasks designed in accordance with our latest research findings on language processing.