Jost, E.¹, Mitchel, A.², Weiss, D.³ & Christiansen, M.¹
¹ Cornell University
² Bucknell University
³ Pennsylvania State University
Previous research has attempted to define how statistical learning operates across modalities; however, it remains unclear to what degree distributional regularities from different modalities are integrated in the service of statistical learning. In the present study, participants passively watched a stream of colored circles appearing on a screen while simultaneously listening to a continuous string of syllables. The syllables formed three words of six syllables each, defined by within-word transitional probabilities (TPs) of 1.0 and between-word TPs of .33. Crucially, the visual stream of circles changed colors such that each sextuplet would be divided into two separate triplets if participants integrated statistics across modalities, and these triplets constituted the items of interest. During audio-only testing, participants endorsed grammatical items defined by the combined audiovisual statistics at above-chance rates, both when the items were paired with lures that had internal auditory TPs of 1.0 (t = 2.090, p = .040) and when they were paired with lures that spanned auditory boundaries (t = 5.281, p < .001). This set of results provides evidence that participants are able to integrate transitional probabilities across auditory and visual modalities during learning, though the question remains as to whether or not such integration results in multimodal representations.
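The transitional-probability structure described above can be illustrated with a minimal sketch. The syllable labels, word inventory, and stream length below are hypothetical placeholders, not the study's actual stimuli; the sketch only shows how forward TPs (TP(b|a) = count of a followed by b, divided by count of a) yield 1.0 within a word and roughly .33 at word boundaries when three words are concatenated in random order.

```python
import random
from collections import Counter

def transitional_probs(stream):
    """Forward TP(b|a) = count(a immediately followed by b) / count(a)."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])  # last syllable has no successor
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Hypothetical six-syllable "words" standing in for the study's stimuli.
words = [["ba", "di", "ku", "pa", "go", "la"],
         ["tu", "fe", "ro", "mi", "se", "no"],
         ["ki", "vu", "te", "da", "lo", "pu"]]

random.seed(0)
stream = [syl for _ in range(100) for syl in random.choice(words)]
tps = transitional_probs(stream)

print(tps[("ba", "di")])  # within-word TP: 1.0
# Between-word TPs (e.g. "la" -> next word's first syllable) hover near 1/3.
```

Under the study's manipulation, the visual stream would impose an additional boundary mid-word, so a learner combining both streams would segment each sextuplet into two triplets despite the uniform auditory TPs.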