Taylor, J. S. H. 1,2, Davis, M. H. 3, & Rastle, K. 2
1 Aston University, UK
2 Royal Holloway University of London, UK
3 MRC Cognition and Brain Sciences Unit, UK
Learning to read involves acquiring letter representations that abstract across case, size, retinal location, and, critically, position. This abstraction allows generalization: the B in BAD is the same letter as the B in CAB. Neuroscientific research suggests that orthographic representations become increasingly abstract from posterior to anterior ventral occipitotemporal cortex (vOT; Dehaene et al., 2005), and computational models propose mechanisms by which position abstraction might be achieved, e.g., open-bigram (Whitney, 2001) or spatial (Davis, 2010) coding. We used Representational Similarity Analysis (RSA; Kriegeskorte, Mur, & Bandettini, 2008) to test the predictions of these computational models against vOT neural response patterns to newly learned written words. Twenty-four adults learned to read 48 new words written in artificial orthographies. After two weeks of training, neural activity was measured with fMRI whilst they silently read the learned words and made occasional meaning judgements. RSA revealed that posterior vOT did not abstract across letter position: neural patterns were similar only for words containing the same letters in the same positions. In contrast, mid-to-anterior vOT showed position abstraction: neural patterns were similar for words containing the same letters regardless of position. These results reveal how the ventral reading stream abstracts over letter position, and show that this abstraction develops after only two weeks of training.
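The contrast between position-specific and position-abstract letter coding can be sketched with toy feature sets. This is an illustrative sketch only: the word pair, the Jaccard overlap measure, and the function names are assumptions for exposition, not the actual implementations of the slot-coding or open-bigram models.

```python
from itertools import combinations

def slot_code(word):
    # Position-specific (slot) coding: each letter is bound to its
    # absolute position, so the same letter in a different slot
    # contributes nothing to similarity.
    return {(i, ch) for i, ch in enumerate(word)}

def open_bigrams(word):
    # Open-bigram coding (Whitney, 2001): a word is represented by all
    # ordered pairs of its letters, abstracting over absolute position.
    return {a + b for a, b in combinations(word, 2)}

def overlap(a, b):
    # Jaccard similarity between two feature sets (an illustrative
    # similarity measure, not the metric used in the models or in RSA).
    return len(a & b) / len(a | b)

# BAD vs. its transposition ABD: the same letters, different positions.
print(overlap(slot_code("BAD"), slot_code("ABD")))        # 0.2 (only D in slot 2 matches)
print(overlap(open_bigrams("BAD"), open_bigrams("ABD")))  # 0.5 (AD and BD are shared)
```

Under slot coding the transposed pair is largely dissimilar, whereas open-bigram coding preserves much of the overlap, mirroring the qualitative difference between the position-specific patterns found in posterior vOT and the position-abstract patterns found in mid-to-anterior vOT.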