Martin, A. E. & Doumas, L. A. A.
School of Philosophy, Psychology and Language Sciences, University of Edinburgh
For both spoken and written language, our brains must build hierarchical linguistic representations from a linear sequence of perceptual input distributed in time. But what computational mechanism underlies this ability? One possibility is that the brain repurposed a mechanism already at its disposal when abstract, hierarchical representation became an efficient solution to a problem posed by the environment. A highly plausible cognitive model would use the same underlying mechanism to perform multiple, functionally related computational feats, e.g., to parse natural language and to reason about it, while exhibiting brain-like processing. We show that a computational model built to learn structured (i.e., symbolic) representations of relational concepts from unstructured inputs (Doumas et al., 2008) can successfully parse sentence stimuli and, crucially, shows oscillatory unit activation that is highly similar to the human cortical activity elicited by the same stimuli (Ding et al., 2016), but not to that elicited by control stimuli. We argue, as do Ding et al. (2016), that this activation reflects the formation of syntactic representations, and that temporal binding by systematic firing asynchrony underlies hierarchical representation in the human brain. We conclude that computational and process models must be integrated for increased plausibility in an integrative model of human cognition.