Modelling short- and long-term statistical learning of music as a process of predictive entropy reduction

Hansen, N. C. 1, 2, 3 , Loui, P. 4 , Vuust, P. 1, 2 & Pearce, M. 5

1 Music in the Brain, Center of Functionally Integrative Neuroscience, Aarhus University Hospital, Denmark
2 Royal Academy of Music Aarhus, Denmark
3 Department of Aesthetics and Communication, Aarhus University, Denmark
4 Department of Psychology and Program in Neuroscience and Behavior, Wesleyan University, USA
5 Centre for Digital Music and Centre for Research in Psychology, Queen Mary, University of London, UK

Statistical learning underlies the generation of expectations with differing degrees of uncertainty. In music, such uncertainty applies to expectations for the pitches in a melody. This uncertainty can be quantified as the Shannon entropy of distributions of expectedness ratings for multiple continuations of each melody, as obtained with the probe-tone paradigm. We hypothesised that statistical learning of music can be modelled as a process of entropy reduction. Specifically, implicit learning of statistical regularities allows a reduction in the relative entropy (i.e. symmetrised Kullback-Leibler divergence) between listeners' prior expectancy profiles and the probability distributions of a musical style or of the stimuli used in short-term experiments.
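The two quantities above can be sketched concretely. The following is a minimal illustrative example (not the authors' actual analysis pipeline): expectedness ratings for a set of candidate continuations are normalised into a probability distribution, from which Shannon entropy is computed, and the symmetrised Kullback-Leibler divergence D(p||q) + D(q||p) compares a listener's expectancy profile against a reference distribution. The rating values and the additive smoothing constant are hypothetical.

```python
import numpy as np

def normalize(ratings):
    # Convert raw expectedness ratings for candidate continuations
    # into a probability distribution (hypothetical preprocessing step).
    r = np.asarray(ratings, dtype=float)
    return r / r.sum()

def shannon_entropy(p):
    # H(p) = -sum_i p_i * log2(p_i), in bits; zero-probability
    # outcomes contribute nothing and are dropped.
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def symmetrised_kl(p, q, eps=1e-12):
    # Symmetrised KL divergence: D(p||q) + D(q||p).
    # eps smoothing avoids log(0) when a distribution has zeros.
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log2(p / q)) + np.sum(q * np.log2(q / p)))

# A uniform rating profile yields maximal entropy (high uncertainty);
# a peaked profile yields lower entropy (a confident prediction).
uniform = normalize([1, 1, 1, 1])   # H = 2.0 bits over 4 continuations
peaked  = normalize([8, 1, 1, 1])   # H < 2.0 bits
```

Under this framing, learning corresponds to a listener's profile moving closer to the reference distribution (shrinking symmetrised KL) and, for stochastically predictable styles, to lower-entropy expectancy profiles.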

Five previous probe-tone experiments with musicians and non-musicians were revisited. In Experiments 1-2, participants rated expectedness for tonal melodies and Charlie Parker solos. Experiments 3-5 tested participants before and after 25-30 minutes of exposure to 5, 15 or 400 melodies generated from a finite-state grammar using the Bohlen-Pierce scale.

As predicted, we found between-participant differences in relative entropy corresponding to the degree and relevance of musical training, and within-participant decreases in entropy after short-term statistical learning of novel music. Thus, whereas inexperienced listeners make high-entropy predictions, statistical learning over varying timescales enables listeners to generate melodic expectations with reduced entropy.