PS_3.036 - Multisensory statistical learning

Glicksohn, A. & Cohen, A.

Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel.

Statistical learning concerns the detection of regularities distributed in space and time. Previous studies have typically focused on unisensory learning; here, we examine multisensory learning over time. In a preliminary experiment, subjects were familiarized with either a single visual stream composed of ‘triplets’ (recurring sequences of shapes) or a single auditory stream composed of ‘words’ (recurring sequences of syllables). Tests contrasting a triplet or word with random shapes or syllables revealed similar rates of unisensory visual and auditory learning. In Experiments 2-3, subjects were familiarized with a combined audio-visual stream, in which each shape appeared simultaneously with a syllable and each triplet was uniquely matched to a word. When subjects were tested on separate visual and auditory tests (Experiment 2), they showed reduced learning, particularly in the auditory domain. However, when subjects were tested on a multisensory test contrasting a word-triplet combination with a triplet paired with random syllables, or a word paired with random shapes (Experiment 3), they showed a high rate of learning. Subsequent experiments revealed that the strongest learning occurs between simultaneous stimuli, whether within or across senses, and that this learning can mask the learning of regularities over time within a modality; multisensory learning over time was minimal. We suggest that learning requires grouping cues, with simultaneous temporal cues dominating other within-modality grouping cues.