Long-term semantic adaptation as statistical learning

O'Seaghdha, P. 1, Munoz-Avila, H. 2, & Seip, T. 2

1 Lehigh University, Psychology and Cognitive Science
2 Lehigh University, Computer Science and Engineering

There is a growing consensus that longer-term semantic facilitation and interference in word production are best explained as two facets of the same adaptive learning mechanism (e.g., Oppenheim et al., 2010). Learning strengthens conceptual-lexical links to selected words and conversely degrades links to competing, unselected ones. However, we still know little about the properties of this learning or about the nature of the representations on which it operates. Recent research from our group shows that learning can operate on very sparse semantic representations (Packer et al., 2013), that interference is augmented by nonsemantic (phonological) similarity between categorically related words (Frazer et al., 2014), and that it arises for unrelated words that are merely believed to be relevant to a goal (Preusse & O'Seaghdha, in prep). These findings suggest that learning is driven by the competitor status of nontarget words rather than by intrinsic similarity. We present fully featured computational simulations in which semantic interference operates broadly on competitor links rather than being restricted to features shared with a target. The models account for the full range of empirical phenomena and provide insights into the nature of semantic adaptation.
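
To make the learning mechanism concrete, the following is a minimal illustrative sketch, not the authors' actual model: an error-driven update over a matrix of conceptual-lexical weights in which the selected word's links from the active features are strengthened while links to all co-activated competitors are weakened, regardless of feature overlap with the target. The function name, learning rate, and matrix layout are assumptions introduced for illustration only.

```python
import numpy as np

# Illustrative sketch (not the authors' model): semantic features in rows,
# lexical items in columns; weights carry conceptual-lexical activation.
rng = np.random.default_rng(0)
n_features, n_words = 6, 4
W = rng.uniform(0.1, 0.3, size=(n_features, n_words))  # hypothetical starting weights

def name_and_learn(W, feature_vec, target, rate=0.05):
    """Select the most activated word, then strengthen links from the active
    features to the target and weaken links to every unselected competitor --
    the broad, competitor-based weakening described in the abstract."""
    activation = feature_vec @ W        # lexical activation driven by active features
    active = feature_vec > 0            # conceptual features engaged on this trial
    for w in range(W.shape[1]):
        if w == target:
            W[active, w] += rate * (1.0 - W[active, w])  # strengthen selected word
        else:
            W[active, w] -= rate * W[active, w]          # degrade competitor links
    return activation.argmax(), W

# Example: repeatedly naming word 0 from a sparse feature pattern leaves its
# co-activated rivals with weakened links, producing later interference.
features = np.array([1, 1, 0, 0, 0, 0], dtype=float)
for _ in range(5):
    choice, W = name_and_learn(W, features, target=0)
```

The key design choice in this sketch is that the weakening step applies to all words co-activated on a trial, not only to those sharing semantic features with the target, mirroring the claim that learning is driven by competitor status rather than intrinsic similarity.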