von Grebmer zu Wolfsthurn, S., Kazanina, N., Briscoe, J. & Prosser, R.
University of Bristol, UK
Statistical learning (SL) reflects an ability to extract regularities from the environment, evidenced in various domains (Conway & Christiansen, 2005) including speech perception (Johnson & Jusczyk, 2001). Our study uses SL to investigate the units of speech perception using implicit and explicit measures of SL combined with EEG (Batterink & Paller, 2016; unpublished).
In the training phase, participants listen to artificial speech streams devoid of intonation cues (e.g. peadoosabezogufootameanevuko...) while EEG is recorded. The streams contain either syllable-based or phoneme-based statistical regularities (Bonatti, Peña, Nespor & Mehler, 2005).
In the testing phase, participants complete a rating task and a forced-choice task (Bonatti et al., 2005) testing explicit learning, and a target-detection task testing implicit learning of words (Franco et al., 2015). We expect an increase in signal power at the word frequency (1.3 Hz) as opposed to the syllable frequency (4 Hz) for both types of stream.
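The frequency-tagging logic behind this prediction can be sketched as follows: because each word spans three syllables presented at roughly 4 Hz, successful segmentation should add spectral power at the word rate of 4/3 ≈ 1.3 Hz. The simulated signals, sampling rate, and amplitudes below are purely illustrative assumptions, not the study's actual EEG pipeline.

```python
import numpy as np

def power_at(signal, fs, target_hz):
    """Spectral power at the FFT bin nearest target_hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return power[np.argmin(np.abs(freqs - target_hz))]

fs = 100                      # illustrative sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)  # 30 s of simulated signal

# Hypothetical "learner" response: energy at both the syllable rate
# (4 Hz) and the word rate (4/3 Hz); a "non-learner" response tracks
# only the syllable rate.
word_hz = 4 / 3
learner = np.sin(2 * np.pi * 4 * t) + 0.8 * np.sin(2 * np.pi * word_hz * t)
non_learner = np.sin(2 * np.pi * 4 * t)

# Learning is indexed by greater power at the word frequency.
print(power_at(learner, fs, word_hz) > power_at(non_learner, fs, word_hz))  # True
```

In this toy comparison, the word-rate peak emerges only when the signal contains a component locked to the three-syllable word period, which is the signature the EEG analysis looks for.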
Whereas EEG results are still being analysed (n=15), behavioural analyses to date (n=6) show listeners' ability to extract both types of statistical regularity, as reflected in the implicit and explicit tasks.
Results have implications for theories of speech perception, for multilingual speakers, and for the diagnosis of clinical populations (Obeid et al., 2016).