Comparing computational mechanisms underlying visual statistical learning in honeybees and humans

Fiser, J. 1, Avarguès-Weber, A. 2, Finke, V. 2, Nagy, M. 1, Szabó, T. 1 & Dyer, A. 3

1 Department of Cognitive Science, Central European University, Október 6 utca 7, Budapest 1051, Hungary
2 Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse; CNRS, UPS, 118 Route de Narbonne, 31062 Toulouse, France
3 Bio-inspired Digital Sensing (BIDS) Lab, School of Media and Communication, RMIT University, Melbourne, VIC, Australia

Do honeybees (Apis mellifera), known to be excellent visual learners, encode statistical information about visual patterns the same way humans do? If so, humans' superior cognitive skills must depend on other factors; if not, the nature of the differences can provide hints about the characteristics of human learning that make it so versatile. We developed a new version of the classical visual statistical learning paradigm and used it for a systematic comparison of how humans and bees learn visual patterns under the same conditions. We found that, in contrast to humans, bees do not automatically encode conditional contingencies within novel visual scenes, only co-occurrence statistics. Although with increased exposure bees shifted from sensitivity to the frequencies of elemental features of the scenes alone to sensitivity to joint frequencies, they never developed an automatic sensitivity to the predictability between elements. Through computational analyses, we also show that the honeybees' learning behavior can be captured by a simple, fragment-based memory-trace model, whereas capturing the human visual learning results requires a probabilistic chunk-learning model. Thus, our results suggest that rich internal predictive representations, developed through possibly probabilistic learning processes, might be key to humans' sophisticated cognitive abilities.
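The three statistics contrasted above — element frequency, joint (co-occurrence) frequency, and conditional predictability — can be illustrated with a minimal sketch. This is not the paper's model; the toy scenes and the shape labels are hypothetical, chosen only to show how an element can be individually frequent yet a poor predictor of any single partner:

```python
# Sketch of the three statistics discussed in the abstract, computed over
# hypothetical "scenes" of shapes that appear together (labels are made up).
from collections import Counter
from itertools import combinations

scenes = [{"A", "B"}, {"A", "B"}, {"A", "B"}, {"A", "C"},
          {"C", "D"}, {"C", "E"}, {"C", "F"}]

# Element frequency: how often each shape appears across scenes.
element_freq = Counter(x for s in scenes for x in s)

# Joint frequency: how often each unordered pair of shapes co-occurs.
joint_freq = Counter(frozenset(p)
                     for s in scenes
                     for p in combinations(sorted(s), 2))

def cond_prob(x, y):
    """P(y | x): how well the presence of x predicts the presence of y."""
    return joint_freq[frozenset({x, y})] / element_freq[x]

# A and B form a reliable pair (high conditional probability), while C is
# just as frequent as A but pairs with many partners, so it predicts any
# one of them poorly — a distinction invisible to pure co-occurrence counts.
print(element_freq["A"], element_freq["C"])          # both frequent
print(cond_prob("A", "B"), cond_prob("C", "D"))      # high vs. low
```

In the abstract's terms, a learner sensitive only to element or joint frequencies would treat the A–B and C–D pairs similarly once their raw counts match, whereas a learner tracking conditional probabilities would distinguish them.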