The Retention & Recognition Model: A Processing Model of Artificial Language Learning

G. Alhama, R. & Zuidema, W.

Institute for Logic, Language and Computation, University of Amsterdam

Although there is a wealth of experimental data on 'statistical learning' and 'rule learning', many detailed questions about the underlying mechanisms are best addressed with an approach that combines experiments with computational modelling. To this end, we present the Retention & Recognition model (R&R), a probabilistic chunking model based on two equations: one describing the probability of recognizing a subsequence of the familiarization stream (which increases with how often that subsequence has been recognized before), and one describing the probability of retaining a newly encountered subsequence in memory. We show that the model accounts for a wide range of existing experimental findings, and we confirm a specific prediction of the model (a skew in response distributions) with new experimental data. We compare R&R with existing exemplar-based, Bayesian and neural network models, using the evaluation proposed in Frank et al. (2010, Cognition), and obtain even higher correlations with human data (94%, 92% and 95% for Experiments 1, 2 and 3) than previous models (including all models reviewed in French et al., 2011, Psych. Rev.). However, our analysis also raises some questions about the various ways models in this domain are evaluated. We therefore call for closer collaboration between experimentalists and modellers, and suggest specific experiments that better distinguish between rival theoretical accounts of learning in artificial language learning (ALL) experiments.
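
The core mechanism lends itself to a compact simulation. The sketch below is illustrative only: the saturating recognition curve (p_recognize), the length-discounted retention probability (p_retain), all parameter values, and the sampling scheme are assumptions made for exposition, not the actual R&R equations. It preserves only the qualitative property stated above, namely that the probability of recognizing a subsequence increases with how often that subsequence has been recognized before.

```python
import math
import random

def p_recognize(count, gamma=0.5):
    """Probability of recognizing a subsequence; rises with how often it
    has been recognized before (illustrative saturating form)."""
    return 1.0 - math.exp(-gamma * count)

def p_retain(length, mu=0.8, delta=0.5):
    """Probability of retaining a newly sampled subsequence in memory
    (illustrative assumption: longer chunks are harder to retain)."""
    return mu * (delta ** (length - 1))

def rr_familiarize(stream, max_len=3, seed=0):
    """One pass over a familiarization stream of syllables.

    At each position a candidate subsequence is sampled; it is either
    recognized (strengthening its memory trace) or, failing that,
    possibly retained as a new chunk.
    """
    rng = random.Random(seed)
    memory = {}  # subsequence -> number of times recognized/retained
    for i in range(len(stream)):
        length = rng.randint(1, max_len)
        chunk = tuple(stream[i:i + length])
        if len(chunk) < length:  # stream ran out near the end
            continue
        count = memory.get(chunk, 0)
        if rng.random() < p_recognize(count):
            memory[chunk] = count + 1  # recognition reinforces the trace
        elif rng.random() < p_retain(len(chunk)):
            memory[chunk] = 1          # retention of a novel subsequence
    return memory

# Toy familiarization stream built from three nonce words
# (syllable inventory in the style of Saffran et al., 1996).
words = ["tupiro", "golabu", "bidaku"]
stream = [w[i:i + 2]
          for w in random.Random(1).choices(words, k=200)
          for i in (0, 2, 4)]
memory = rr_familiarize(stream)
for chunk, n in sorted(memory.items(), key=lambda kv: -kv[1])[:5]:
    print("".join(chunk), n)
```

Under these toy assumptions, frequent word-internal chunks accumulate recognitions faster than chunks that straddle word boundaries, which is the kind of rich-get-richer dynamic a chunking account of segmentation relies on.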