Noise-band vocoding interferes with auditory statistical learning in adults

Lew-Williams, C.1, Snyder, H.2, Reinhart, P.2, & Grieco-Calub, T.2

1 Princeton University
2 Northwestern University

Successful segmentation of speech hinges on the ability to hear: listeners need to discriminate individual phonemes and syllables, and also track relations over time. These prerequisites are critical for adults and infants with cochlear implants, who must acclimate to spectrally degraded electric hearing and break into speech for the first time, respectively. Here, we investigated the role of spectral resolution in statistical learning. Adults listened to a pause-free artificial language that was either unprocessed, or processed with 8-channel noise-band vocoding. On a 2-AFC task testing their ability to distinguish trisyllabic words, partwords, and nonwords, participants were less accurate in identifying words and partwords (vs. nonwords) when exposed to the 8-channel vocoded language (51.8%) relative to the unprocessed language (76%; p<0.001). Consistent with these findings, a syllable identification task revealed that phoneme identification was impaired among adults tested in the 8-channel (59%) vs. unprocessed conditions (98%, p<0.001). Information transmission analysis showed that initial consonant voicing was the primary locus of confusion, suggesting that errors in statistical learning were partially attributable to specific errors in phoneme identification. This experiment complements an ongoing investigation examining statistical learning abilities in infants with cochlear implants, with the translational goal of improving aural habilitation/rehabilitation programs.
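The noise-band vocoding manipulation can be sketched in code: the signal is split into frequency bands, each band's amplitude envelope is extracted (rectification plus low-pass smoothing), and the envelopes modulate band-limited noise carriers that are then summed. The sketch below is illustrative only and is not the processing pipeline used in the study; the band edges (80-7000 Hz, logarithmically spaced), filter orders, and 160 Hz envelope cutoff are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, lo=80.0, hi=7000.0, env_cutoff=160.0):
    """Noise-band vocoder sketch: analyze `signal` into n_channels bands,
    extract each band's amplitude envelope, and use it to modulate
    band-limited noise. Parameter values are illustrative assumptions."""
    # Logarithmically spaced band edges across the analysis range
    edges = np.geomspace(lo, hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal), dtype=float)
    # Low-pass filter used to smooth the rectified band signal into an envelope
    env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    for k in range(n_channels):
        band_sos = butter(4, [edges[k], edges[k + 1]], btype="band",
                          fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)          # analysis band
        envelope = sosfiltfilt(env_sos, np.abs(band)) # rectify + smooth
        # Noise carrier limited to the same band, modulated by the envelope
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
        out += np.clip(envelope, 0.0, None) * carrier
    return out
```

Increasing n_channels preserves more spectral detail; 8 channels is a common approximation of the spectral resolution available to cochlear implant users.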