Motor-based prediction in speech.

Tian, X. 1, 2, 3

1 New York University Shanghai
2 Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University
3 NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai

Producing and controlling speech requires linking the language and motor systems. One such linking mechanism has been hypothesized to be a motor-based prediction process: the auditory consequences of speech production are predicted via a top-down process generated from the motor system, and the predicted speech is compared with auditory feedback to constrain and update production. In a series of studies, we tested the neural dynamics and the critical assumptions of this model, namely that the motor-based prediction process can induce mental representations at multiple levels of the speech hierarchy and can interact with speech perception during the processing of different speech and acoustic attributes. Evidence from behavioral, electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) experiments using novel imagined-speech paradigms suggests that top-down motor-based processes can generate precise predictions at the phonological level, as well as at acoustic levels such as pitch and even loudness. Moreover, such multi-level prediction can modulate behavioral and neural responses during perception at the corresponding speech levels. These consistent behavioral, electrophysiological, and neuroimaging results suggest that the neural representation induced top-down during production converges to the same multi-level representational format as the neural representation established during perception. Such a coordinate transformation between the motor and language systems in a top-down motor-based predictive process forms the neurocomputational foundation that enables interaction with bottom-up processes in the monitoring and control of speech production.
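
The predict-compare-update loop at the heart of this model can be summarized in a minimal computational sketch. Everything below is an illustrative assumption rather than the model tested in these studies: the linear forward model W_forward, the function names, and the pseudo-inverse correction are hypothetical stand-ins for a forward model that maps motor commands to predicted auditory features and for an error-driven production update.

import numpy as np

# Minimal sketch of a motor-based prediction (forward-model) loop.
# The linear forward model and all names are illustrative assumptions,
# not the computational model from the experiments described above.

rng = np.random.default_rng(0)

# Hypothetical forward model: motor command -> predicted auditory features
# (e.g., a small vector standing in for phonological, pitch, and loudness attributes).
W_forward = rng.normal(size=(3, 3))

def predict_auditory(motor_command):
    # Top-down prediction of the auditory consequences of a motor command
    # (the efference-copy-based prediction).
    return W_forward @ motor_command

def speak(motor_command):
    # The "world": actual auditory feedback, here the same mapping plus noise
    # standing in for perturbations of the feedback signal.
    return W_forward @ motor_command + rng.normal(scale=0.05, size=3)

target = np.array([1.0, 0.5, -0.3])   # intended auditory outcome
motor = np.zeros(3)                   # initial motor command
lr = 0.5                              # update rate for production control

for step in range(20):
    predicted = predict_auditory(motor)       # top-down prediction
    feedback = speak(motor)                   # bottom-up auditory feedback
    prediction_error = feedback - predicted   # mismatch signals a perturbation
    goal_error = target - predicted           # distance from the intended sound
    # Corrective update: move toward the goal while compensating for the
    # mismatch; the pseudo-inverse maps auditory error back to motor space.
    motor = motor + lr * np.linalg.pinv(W_forward) @ (goal_error - prediction_error)

print("final auditory error:", np.round(target - predict_auditory(motor), 3))

In this toy loop the comparison of prediction with feedback is what allows the controller to compensate for unexpected deviations, mirroring the monitoring-and-control role the abstract assigns to motor-based prediction.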