Optimal Bias Recovery Conditions in Statistical Learning Experiments

Thompson, B. 1, 2

1 Vrije Universiteit Brussel
2 The University of Edinburgh

A central objective in the psychological sciences is to uncover the inductive biases that guide statistical inference in human learning. One methodology that is rapidly gaining popularity uses model-fitting techniques to estimate bias parameters in computational cognitive models from experimental data. For example, this approach has yielded estimates of prior distributions in Bayesian computational models of statistical inference across a wide range of psychological domains, such as language acquisition, category learning, causal learning, and frequency learning. Here I note that this approach to bias estimation can be systematically improved or hindered by the particular training conditions encountered by experimental participants. I present a simple analytic tool to help experimenters determine the optimal training conditions for reverse-engineering an inductive bias. I derive an 'Honesty' metric: given a model of statistical learning and a model-fitting procedure, we can determine in advance which training regimes encourage inferences and behaviour that exhibit the strongest signatures of an underlying inductive bias. I demonstrate this principle by simulating a series of artificial-language learning experiments in which (simulated) participants' biases are known, and show that the conclusions we draw can be dramatically influenced by the kinds of statistical regularities participants encounter during training.
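To make the underlying idea concrete, the following is a minimal toy sketch (not the paper's actual derivation or metric). It assumes a Beta-Binomial learner whose behavioural prediction is the posterior mean, and scores each candidate training regime (k successes out of n trials) by how sensitive that prediction is to the learner's prior parameters: regimes where behaviour barely changes as the prior changes carry weak signatures of bias. All function names and the choice of sensitivity measure here are illustrative assumptions.

```python
import math

def posterior_mean(k, n, a, b):
    """Predicted response of a Beta(a, b)-Binomial learner
    after observing k successes in n training trials."""
    return (k + a) / (n + a + b)

def bias_sensitivity(k, n, a, b, eps=1e-6):
    """Toy 'honesty' score for a training regime: the magnitude of the
    gradient of predicted behaviour with respect to the prior parameters,
    estimated by central finite differences. Larger values mean behaviour
    in this regime is more diagnostic of the underlying prior."""
    da = (posterior_mean(k, n, a + eps, b)
          - posterior_mean(k, n, a - eps, b)) / (2 * eps)
    db = (posterior_mean(k, n, a, b + eps)
          - posterior_mean(k, n, a, b - eps)) / (2 * eps)
    return math.hypot(da, db)

# Hypothetical 'true' prior of a simulated participant.
a_true, b_true = 1.0, 1.0
n = 10

# Score every possible training regime and pick the most diagnostic one.
scores = {k: bias_sensitivity(k, n, a_true, b_true) for k in range(n + 1)}
best_regime = max(scores, key=scores.get)
```

In this toy setting the extreme regimes (all successes or all failures) turn out to be more diagnostic of the prior than balanced training data, illustrating how an analysis of this kind could rank candidate training conditions before any participants are run.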