Vasconcelos, M. 1 & Pinheiro, A. P. 2
1 Neuropsychophysiology Lab, School of Psychology, University of Minho, Portugal
2 Voice, Affect and Speech Lab, Faculty of Psychology, University of Lisbon, Portugal
Music and speech are arguably the most complex human auditory-motor functions, and recent literature demonstrates that musical training affects speech processing at multiple levels. This study tested the effects of musical training on the neural processing of statistical regularities in linguistic input. To examine the neural signature of speech segmentation, we engaged musicians and non-musicians in an artificial (sung) language learning task while acquiring electrophysiological data. Participants were instructed to listen carefully to a continuous stream of trisyllabic prosodic words, in which transitional probabilities between syllable pairs were the only cues to segmentation. To evaluate statistical learning behaviorally, participants completed an auditory forced-choice task. Musicians outperformed non-musicians in learning the prosodic words, and only the musicians' performance was significantly above chance. Importantly, at the electrophysiological level, a negative component peaking between 250 and 400 ms after word onset in the stream (N400) emerged for both groups, with musicians showing increased N400 amplitude. These results suggest that musical expertise leads to greater sensitivity to the statistical properties of linguistic input at the neural level and an enhanced capacity to segment speech.