[PS-2.23] The emergence of chunks by regularizing transition probabilities

Ferdinand, V. 1, Kirby, S. 2 & DeDeo, S. 3,1

1 Santa Fe Institute
2 University of Edinburgh
3 Carnegie Mellon University

We extend the notion of linguistic regularization to a choice reaction time task to explore how chunks emerge during sequence learning. Learning occurs when an agent finds a way to compress the data in the world around it, and regularization occurs when an agent over-compresses the data, inducing stronger patterns than are veridically present. We present a model of sequence learning in which learners regularize by over-representing higher-frequency transition probabilities and under-representing lower-frequency transition probabilities. This behavior creates sequences with regions of low-entropy transitions bounded by regions of high-entropy transitions (i.e., chunks). We fit this model to empirical data from a sequence learning task with a training phase (where participants observe 6 lights flashing in a sequence on an iPad and tap the lights as they appear) and a testing phase (where participants freely produce a sequence similar to the one they observed). We find that training-phase reaction times predict regularization events (and we discuss the information-theoretic reasons behind this finding); however, participant behavior ran counter to our predictions: participants significantly under-produced bigrams with high transition probabilities during the testing phase. Plans for further experiments are discussed.
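The abstract does not specify the functional form of the regularization. The sketch below illustrates one common way to formalize it, in which each row of a transition matrix is raised to a sharpening exponent alpha > 1 and renormalized, so that high-frequency transitions are over-represented and low-frequency ones under-represented. The matrix P, the exponent alpha, and the function names are illustrative assumptions, not the fitted model.

    import numpy as np

    def regularize(P, alpha=2.0):
        # Hypothetical regularizer (not from the abstract): raise each
        # transition probability to the power alpha > 1 and renormalize
        # rows, over-representing high-frequency transitions and
        # under-representing low-frequency ones.
        Q = P ** alpha
        return Q / Q.sum(axis=1, keepdims=True)

    def row_entropies(P):
        # Shannon entropy (bits) of each state's outgoing transitions;
        # log2(1) = 0 masks the zero-probability entries.
        logP = np.log2(np.where(P > 0, P, 1.0))
        return -(P * logP).sum(axis=1)

    # Toy 6-state transition matrix, one state per light.
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(6), size=6)

    Q = regularize(P)
    print("row entropies before:", row_entropies(P).round(2))
    print("row entropies after: ", row_entropies(Q).round(2))
    # Rows with a dominant transition sharpen toward determinism (low
    # entropy) while near-uniform rows stay comparatively high-entropy;
    # runs of low-entropy transitions bounded by high-entropy ones are
    # the chunks described above.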