Children's Word Learning Is Improved in Low Entropy

Lavi-Rotbain, O. & Arnon, I.

The Hebrew University of Jerusalem

During their first year, infants learn to name objects. To do so, they need to segment speech, extract the label, and create accurate object-label pairings. While children successfully do so in the wild, they seem to struggle with simultaneously learning segmentation and object-label pairings in the lab. Here, we ask whether this difficulty is related to the uniform distribution used in lab studies, where all items are equally frequent. Such a distribution is less predictable, and has higher entropy, than that of natural language. Will children learn better in low entropy? We first examined children's word segmentation in an artificial language (mean age = 10;4 years, N = 70) across two levels of entropy: high (a uniform distribution) and low (created by making one word more frequent). Children's segmentation of both the frequent and the infrequent words was facilitated in the low-entropy condition. Next, we examined children's simultaneous learning of segmentation and object-label pairings (mean age = 10;4 years, N = 61) across the same two entropy levels. Children's performance on both tasks was again facilitated in the low-entropy condition, but the effect was smaller. These results indicate that children are sensitive to changes in entropy, and that using uniform distributions in lab-based studies may underestimate their abilities.
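To make the entropy contrast concrete, the sketch below computes Shannon entropy for a uniform word-frequency distribution versus one in which a single word is made more frequent. The eight-word inventory and the token counts are illustrative assumptions for the purpose of the example, not the study's actual materials; the point is only that skewing frequencies toward one word lowers the entropy of the distribution.

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of the distribution implied by raw frequency counts."""
    total = sum(counts)
    probs = [c / total for c in counts]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical token counts over an eight-word inventory (not the study's design):
uniform = [24] * 8              # every word equally frequent -> higher entropy
skewed  = [80] + [24] * 7       # one word made much more frequent -> lower entropy

print(f"uniform: {shannon_entropy(uniform):.2f} bits")  # 3.00 bits (= log2(8))
print(f"skewed:  {shannon_entropy(skewed):.2f} bits")   # about 2.81 bits, below 3
```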