[PS-1.22] The Role of Memory Constraints in Regularization of Unpredictable Variation

Saul, J. 1, Samara, A. 1, Smith, K. 2 & Wonnacott, E. 1

1 University College London
2 University of Edinburgh

Natural languages exhibit variation, but it is generally conditioned on linguistic and sociolinguistic factors. One explanation is that language learners are biased against learning unpredictable variation. Artificial language learning experiments have demonstrated that both children and adults tend to regularize unconditioned variation (e.g. by boosting the frequency of one variant in their input), but that children do so more than adults (Hudson-Kam & Newport, 2016; Samara et al., in press), possibly due to their lower memory capacity (Hudson-Kam & Chang, 2009). We compared 6-year-olds' tendency to regularize in a fully artificial language relative to one containing familiar content words. We also measured regularization with trained and untrained words via unprompted and prompted sentence production tasks, to investigate the effects of novelty and ease of lexical retrieval. Children regularized more in the fully artificial language, indicating the influence of memory demands; we also found a positive correlation between children's tendency to regularize and their working memory. We found no effect of trained versus untrained words, nor of prompted versus unprompted production. Overall, these results support the claim that working memory is involved in regularization. Future work should establish whether this is driven by difficulties in encoding the input, in retrieving it, or both.