Statistical learning of categories is not specialized for speech

van der Ham, S. & de Boer, B.

Vrije Universiteit Brussel

A central issue in speech evolution research is which aspects of human cognition have undergone selective pressure related to speech. We investigate statistical learning of categories, a learning mechanism that operates across domains. Previous research on artificial grammar learning suggests that statistical learning may operate differently across modalities. For speech categories, computational models predict that, unless there is a pressure to maintain differences between sound categories, a general bias toward the center of a bounded distribution is always at work. Taken together, these findings lead us to expect different categorization behavior in the auditory modality compared to the tactile and visual modalities, resulting in more extreme auditory categories.

We present the results of an experiment in which participants learn a set of signals drawn from a bimodal distribution through different modalities: audition, vision, and touch. The critical feature is duration. After a training phase in which they learn variants of the two categories, participants complete a categorization task and a production task. The preliminary results show no differences between modalities: in both tasks, participants exhibited similar categorization and production behavior across domains. Our findings suggest that statistical learning of signal categories is a mechanism not specialized for the auditory modality; any modality-specific behavior may be due to training.
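As a rough illustration of the kind of training input described above, the sketch below samples stimulus durations from a two-category bimodal mixture. All numbers (category means, spread, sample size) are hypothetical placeholders, not the values used in the study:

```python
import random

def sample_duration(rng, short_mean=150.0, long_mean=450.0, sd=40.0):
    """Draw one stimulus duration (in ms) from a bimodal mixture of two
    Gaussian categories. Parameters are illustrative, not the study's values."""
    mean = rng.choice([short_mean, long_mean])  # pick a category at random
    return max(0.0, rng.gauss(mean, sd))       # durations cannot be negative

rng = random.Random(0)
durations = [sample_duration(rng) for _ in range(1000)]

# In this sketch, a simple midpoint boundary separates the two categories.
boundary = (150.0 + 450.0) / 2
short_category = [d for d in durations if d < boundary]
long_category = [d for d in durations if d >= boundary]
```

In an experiment of this kind, the same duration values could drive an auditory tone, a visual flash, or a vibrotactile pulse, keeping the statistical structure identical across modalities.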