[PS-3.9] Non-adjacent dependency learning over different segments in speech

Weyers, I. & Mueller, J. L.

University of Osnabrueck

Processing dependencies between distant speech elements is central to human language (e.g. he sings). It has been suggested that vowels are the segmental unit informative for extracting structural regularities, while consonants guide lexical processes. Support for this claim stems mainly from artificial grammar learning (AGL) experiments employing variable repetition-based ABB or ABA structures. In the present EEG study, 31 adult participants were exposed to an AXB grammar coded over specific trisyllabic sequences (biXpe, goXku). Learning phases alternated with a grammaticality judgment task, in which we tested for successful generalization of this regularity not only at the syllable level, but also at the vowel (xiXxe, xoXxu) and consonant (bxXpx, gxXkx) levels. While the behavioral results indicate that non-adjacent dependencies between specific syllables in spoken language are not readily generalized from syllables to vowels, and even less so to consonants, the EEG results reflect early discrimination effects for both syllables and consonants, but not for vowels. In contrast to the suggested rule-related function of vowels in auditory speech processing, consonants emerge as the decisive element in the present study. We argue that when learning and memorization of concrete forms are necessary in order to abstract dependency relations, consonants are actually highly informative.
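
The relation between the trained AXB frames and the three generalization levels can be illustrated with a short sketch. The Python snippet below is not part of the study: the filler (X) syllables and the specific novel consonants and vowels used to build the generalization items are placeholder assumptions, chosen only to show how syllable-, vowel-, and consonant-level test items preserve different parts of the trained A_B frames (biXpe, goXku).

```python
# Illustrative sketch of the AXB stimulus structure (not the original stimulus set).
# A_B frames come from the abstract; fillers and novel segments are assumptions.

TRAINED_FRAMES = [("bi", "pe"), ("go", "ku")]   # non-adjacent A_B dependencies
FILLERS = ["do", "ma", "li"]                     # hypothetical X syllables

# Hypothetical novel segments for the generalization items.
NOVEL_CONSONANTS = {"bi": "di", "pe": "te", "go": "mo", "ku": "nu"}  # keep vowel, change consonant
NOVEL_VOWELS = {"bi": "ba", "pe": "po", "go": "ge", "ku": "ki"}      # keep consonant, change vowel

def items(frames, fillers):
    """Combine each A_B frame with every filler X into trisyllabic AXB items."""
    return [a + x + b for (a, b) in frames for x in fillers]

# Syllable level: the full trained syllables carry the dependency (biXpe, goXku).
syllable_items = items(TRAINED_FRAMES, FILLERS)

# Vowel level: only the vowels of A and B are preserved (xiXxe, xoXxu).
vowel_frames = [(NOVEL_CONSONANTS[a], NOVEL_CONSONANTS[b]) for (a, b) in TRAINED_FRAMES]
vowel_items = items(vowel_frames, FILLERS)

# Consonant level: only the consonants of A and B are preserved (bxXpx, gxXkx).
consonant_frames = [(NOVEL_VOWELS[a], NOVEL_VOWELS[b]) for (a, b) in TRAINED_FRAMES]
consonant_items = items(consonant_frames, FILLERS)

if __name__ == "__main__":
    print("syllable level :", syllable_items)
    print("vowel level    :", vowel_items)
    print("consonant level:", consonant_items)
```

The sketch only makes the contrast between conditions explicit: syllable-level items reuse the trained A and B syllables with a new X, whereas vowel- and consonant-level items keep just one segmental tier of the A_B frame intact.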