[PS-3.4] Set Size Does Not Matter. Entropy Drives Rule Induction in Non-Adjacent Dependency Learning

Grama, I. 1 , Radulescu, S. 2, 3 , Wijnen, F. 2, 3 & Avrutin, S. 2, 3

1 University of Amsterdam
2 Utrecht University
3 Utrecht Institute of Linguistics OTS

Increasing input entropy (H, a measure of variability) has been shown to drive learners' gradual transition from item-bound learning to rule induction in an artificial grammar (Radulescu, Wijnen, & Avrutin, in prep).
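Assuming H refers to the standard Shannon entropy of the input distribution (the abstract does not spell out exactly which distribution the reported values are computed over), the quantity is

H = -\sum_i p_i \log_2 p_i,

where p_i is the probability of item i in the input; for a uniform distribution over N distinct items this reduces to H = \log_2 N.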

In this study we probed Radulescu et al.'s entropy model in non-adjacent dependency (NAD) learning. Participants listened to an aXb language in which they had to learn item-bound dependencies between a and b, while also generalizing a_b dependencies to novel X words. Unlike Gomez (2002), we kept the X set size constant (18 Xs) and manipulated entropy by combining each a_b frame with either all Xs or only a subset of Xs, yielding three Entropy Conditions: Minimum (H = 3.52), Medium (H = 4.27) and Maximum Entropy (H = 4.7).
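The following minimal sketch illustrates the logic of the manipulation: restricting which Xs each a_b frame combines with lowers the entropy of the resulting string distribution even when the X set itself stays at 18 items. The frame and X labels, subset sizes, and counting scheme are hypothetical and do not attempt to reproduce the H values reported above, which depend on the authors' exact stimulus distribution.

```python
from collections import Counter
from math import log2

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a frequency distribution."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log2(p) for p in probs)

# Hypothetical illustration: 3 a_b frames and a constant set of 18 X words.
frames = [f"a{i}_b{i}" for i in range(1, 4)]
xs = [f"X{j}" for j in range(1, 19)]

# Higher-variability scheme: every frame combines with every X.
all_combinations = [(f, x) for f in frames for x in xs]

# Lower-variability scheme: each frame combines with only a 6-X subset.
subset_combinations = [(f, x) for i, f in enumerate(frames)
                       for x in xs[i * 6:(i + 1) * 6]]

for label, strings in [("all Xs", all_combinations),
                       ("X subsets", subset_combinations)]:
    counts = list(Counter(strings).values())
    print(label, round(shannon_entropy(counts), 2))
```

With uniform exposure, the unrestricted scheme yields a higher string entropy than the subset scheme, which is the contrast the three Entropy Conditions exploit.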

We found a significant effect of Condition, with better generalization of NADs in the Maximum Entropy Condition than in the other Conditions. Learning was significantly above chance in the Maximum Entropy Condition, but not in the Minimum and Medium ones, even though the X set size was equally large in all three Conditions.

These results pinpoint the source of generalization: not set size (Gomez, 2002), but input entropy. Generalization of non-adjacent dependencies appears to require a critical level of input entropy.