
ESCOP 2011, 17th MEETING OF THE EUROPEAN SOCIETY FOR COGNITIVE PSYCHOLOGY 29th Sep. - 02nd Oct.

Learning novel grammars, vocabularies and orthographies: developmental and neural perspectives.

Friday, September 30th, 2011 [08:30 - 10:30]

SY_05. Learning novel grammars, vocabularies and orthographies: developmental and neural perspectives

Taylor, J.

MRC Cognition and Brain Sciences Unit, Cambridge, UK

One of the most impressive aspects of human language is our ability to learn both item-specific and generative or rule-governed knowledge. However, the mechanisms underlying these processes have been hard to establish through natural language studies in which many factors confound comparisons of regular/irregular and familiar/unfamiliar items. Six papers investigate the cognitive and neural mechanisms that support the acquisition of item-specific and generative knowledge, asking how systematicity and meaningfulness influence within- and cross-modal learning. Answering these questions will help resolve long-standing theoretical debates (e.g. between localist/distributed accounts of cognition), have implications for curriculum design (what are the merits of learning abstract rules vs. context/meaning in reading or second language learning?), and advance us towards the goal of discovering the cognitive and biological foundations of language and literacy acquisition. The studies presented all use artificial language learning paradigms to investigate acquisition and generalization of orthography, vocabulary and syntax. Our innovative methods give complete control over exposure to the statistics of the languages being learned and enable us to examine language learning as it unfolds, complementing and extending existing developmental and cognitive neuroscience research. Papers one to three explore the factors affecting children’s learning, starting with the acquisition of new symbol-sound pairings (Robin Litt), followed by novel written words (Fiona Duff), and ending with new grammars (Elizabeth Wonnacott). Paper four uses a multimodal learning paradigm to investigate word segmentation and word-referent mapping in adults (Toni Cunillera). Finally, papers five and six combine artificial language learning with brain imaging techniques to explore the similarities/differences between the neural systems supporting newly learned artificial and native grammars (MEG experiments - Annika Hulten), and orthographic versus object-label learning (fMRI experiments - Jo Taylor). In sum, we show how convergent evidence from development and neurobiology provides crucial evidence for understanding reading and language acquisition.


TALKS

SY_05.1 - What can your child’s paired associate learning tell us about his reading ability?

Litt, R., Nation, K. & Watkins, K.

University of Oxford, UK

Previous research has established a relationship between poor reading and Paired Associate Learning (PAL), a task in which participants learn stimulus-response mappings. Whether this relationship results from differences in verbal learning, or from the ability to establish orthography-phonology mappings, remains unclear. The current study investigated the hypothesis that children with dyslexia have specific impairments in crossmodal (visual-verbal, verbal-visual), but not unimodal (verbal-verbal, visual-visual) PAL. Forty-five children (15 with dyslexia, 15 chronological-age (CA) controls, 15 reading-age (RA) controls) aged 8-11 were matched for nonverbal intelligence and tested across four PAL conditions, each with six stimulus pairs: visual-verbal, verbal-verbal, visual-visual, and verbal-visual. Novel abstract symbols and nonwords were used, eliminating the role of previous learning/knowledge and allowing us to simulate the earliest stages of letter-sound learning. PAL was tested over four weeks, with one PAL condition per week to minimize interference between conditions. On day one of each week, participants completed a computerized PAL task consisting of two presentation trials and five test trials with feedback. The next day, participants completed a delayed recall and a yes/no recognition task. Data were analyzed using mixed factorial ANOVAs (comparing groups and performance across conditions) and multiple regression (examining the relationship between PAL and reading ability). Children with dyslexia performed as well as CA controls in the nonverbal condition (visual-visual), but significantly worse in conditions with a verbal component (visual-verbal, verbal-verbal, verbal-visual). Performance patterns were similar to those of RA controls. Contrary to the hypothesis, performance was not selectively impaired in crossmodal PAL. However, the finding of impaired verbal-visual PAL, which required no verbal output, suggests that verbal response demands alone cannot explain the findings. Two alternative hypotheses, one of verbal-domain deficits and the other of an additive effect of verbal task demands and crossmodal PAL, are discussed.
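For concreteness, the analysis logic can be sketched as below. This is our illustration, not the authors' code, and the file and column names (pal_scores.csv, recall, reading_score, and so on) are hypothetical.

    # Sketch of the reported analyses: a mixed factorial ANOVA (group x PAL
    # condition) followed by a regression of reading ability on PAL scores.
    import pandas as pd
    import pingouin as pg

    df = pd.read_csv("pal_scores.csv")  # hypothetical: one row per child x condition

    # Group (dyslexia / CA control / RA control) is between-subjects;
    # PAL condition (visual-verbal, verbal-verbal, ...) is within-subjects.
    aov = pg.mixed_anova(data=df, dv="recall", within="condition",
                         subject="child_id", between="group")
    print(aov)

    # Multiple regression: do the four PAL conditions predict reading ability?
    wide = df.pivot(index="child_id", columns="condition", values="recall")
    wide["reading"] = df.groupby("child_id")["reading_score"].first()
    reg = pg.linear_regression(
        wide[["visual_verbal", "verbal_verbal", "visual_visual", "verbal_visual"]],
        wide["reading"])
    print(reg)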




SY_05.2 - The role of children’s phonological and semantic knowledge in learning to read words

Duff, F. & Hulme, C.

University of York, UK

This paper presents two experiments that focus on the relationship between oral language skills and learning to read single words in 5- to 6-year-old children. According to theories of reading development, both phonological and semantic knowledge about a word should predict how easily children learn to read it. However, few developmental studies have considered item-level relationships when assessing the impact of linguistic knowledge on learning to read. Furthermore, in relation to oral pre-exposure paradigms, it remains unclear whether semantics exerts any influence on learning beyond the effect of phonology. In Experiment 1, children learned to read real but unfamiliar words varying in spelling-sound consistency and imageability. Consistency affected performance on early trials, while imageability affected performance on later trials. Individual differences among children in phonemic awareness were related to learning of the trained words, and knowledge of a word's meaning predicted how well it was learnt. These results confirm, across participants, the importance of phonological skills for learning to read, but crucially suggest that, within participants, item-level semantic knowledge facilitates learning to read single words. In Experiment 2, phonological and semantic knowledge of nonwords was manipulated prior to word learning. Familiarization with a word's pronunciation facilitated word learning, but there was no additional benefit from being taught to associate a meaning with a nonword. In view of Experiment 1, it is argued that semantic knowledge does influence the process of learning to read single words, but that more naturalistic methodologies may be needed to detect this effect in oral pre-exposure paradigms.
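The item-level logic can be sketched as a trial-level regression; this is an illustrative reconstruction under assumed variable names, not the authors' analysis.

    # Sketch of an item-level analysis: predict single-trial reading accuracy
    # from between-child phonemic awareness and within-child, item-level
    # knowledge of the word's meaning. File and variable names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    trials = pd.read_csv("word_learning_trials.csv")  # child x item x trial rows

    model = smf.logit("correct ~ phonemic_awareness + knows_meaning"
                      " + consistency + imageability", data=trials).fit()
    print(model.summary())

A fuller treatment would use a mixed-effects model with random intercepts for children and items, but the fixed-effect structure above captures the between- versus within-participant distinction drawn in the abstract.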




SY_05.3 - Constraining generalization in artificial language learning

Wonnacott, E.

University of Oxford, UK

Successful language acquisition involves generalization, but learners must balance this against the acquisition of lexical constraints. For example, native English speakers know that certain adjective-noun combinations are impermissible (e.g. strong winds, high winds, strong breezes, *high breezes). Another example is the restrictions imposed by verb sub-categorization (e.g. I gave/sent/threw the ball to him; I gave/sent/threw him the ball; I donated/carried/pushed the ball to him; *I donated/carried/pushed him the ball). How do children learn these exceptions (Baker, 1979)? The current work addressed this question via a series of Artificial Language Learning experiments with 6-year-olds. The results demonstrated that children are sensitive to distributional statistics in their input language and use this information to make inferences about the extent to which generalization is appropriate (cf. Braine, 1971; Wonnacott, Newport & Tanenhaus, 2008). In particular, there was evidence that children assessed whether the choice of linguistic structures depended upon the particular words with which those structures had occurred, and that this affected their learning of arbitrary exceptions. The results are interpreted in terms of a rational Bayesian perspective on statistical learning (Perfors, Tenenbaum & Wonnacott, 2010).
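The rational Bayesian idea can be illustrated with a toy Beta-Binomial model comparison: given each verb's construction counts, does the evidence favour one shared construction preference (licensing generalization) or verb-specific preferences (licensing arbitrary exceptions)? This is our illustration, not the model of Perfors et al. (2010), and the counts are invented.

    # Toy Bayesian comparison: one shared construction preference vs.
    # verb-specific preferences, via Beta(1,1)-Binomial marginal likelihoods.
    from scipy.special import betaln

    def log_marginal(k, n, a=1.0, b=1.0):
        # log integral of p^k (1-p)^(n-k) under a Beta(a, b) prior on p
        return betaln(k + a, n - k + b) - betaln(a, b)

    # Per verb: (occurrences in construction A, total occurrences) -- invented
    verbs = [(9, 10), (1, 10), (10, 10), (0, 10)]  # sharply skewed usage

    shared = log_marginal(sum(k for k, _ in verbs), sum(n for _, n in verbs))
    lexical = sum(log_marginal(k, n) for k, n in verbs)
    print(f"log P(data | shared preference) = {shared:.2f}")
    print(f"log P(data | verb-specific)     = {lexical:.2f}")
    # Here verb-specific wins, so a rational learner should respect lexical
    # exceptions; with uniform usage across verbs, the shared model wins and
    # generalization is warranted.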




SY_05.4 - Bridging the gap between speech segmentation and word-to-world mappings: Evidence from an audiovisual statistical learning task

Cunillera, T.

University of Barcelona, Spain

In a recent study (Cunillera, Laine, Càmara & Rodríguez-Fornells, 2010) we raised the question of how second language learners are able to segment words and map them to a meaning. Can these two processes occur simultaneously? We explored this unresolved issue using a new multimodal learning paradigm (see Cunillera et al., 2009) that tracked the first steps in learning new words and their mappings to visual referents. It comprised a continuous audiovisual stream in which the transitional probability of syllables was the only acoustic cue available to segment the stream into words, together with a stream of object images that accompanied the novel words. The object images were systematically varied in terms of the constancy of the word-picture association and their meaningfulness. The results indicated good word-referent mapping and word segmentation after short exposure to the audiovisual stream, and suggest that i) mapping words to pictures is more effective when the visual referents are meaningful objects; ii) the consistency of the word-picture association affects word segmentation performance; iii) the effect of associative strength on segmentation is most prominent with meaningful objects; and iv) detection of temporal contiguity between multimodal stimuli may help second-language learners not only to segment speech but also to detect word-object relationships in natural environments. All in all, the present results suggest that word segmentation and word-referent mapping are closely related processes: word segmentation is affected by the consistency of the mapping relationship, and both segmentation and mapping can be accomplished within the same short exposure.
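Segmentation from transitional probabilities can be sketched as follows; the words and counts are invented for illustration and this is not the authors' stimulus code.

    # Sketch of segmentation by syllable transitional probabilities (TPs),
    # the only acoustic cue in the stream: TP(x -> y) = count(xy) / count(x).
    import random
    from collections import Counter

    words = ["tupiro", "golabu", "bidaku", "padoti"]
    stream = [random.choice(words) for _ in range(400)]
    sylls = [w[i:i + 2] for w in stream for i in range(0, len(w), 2)]

    pairs = list(zip(sylls, sylls[1:]))
    pair_n = Counter(pairs)
    first_n = Counter(p[0] for p in pairs)
    tp = {p: n / first_n[p[0]] for p, n in pair_n.items()}

    # Within-word transitions (tu->pi, pi->ro) approach TP = 1.0; transitions
    # spanning a word boundary hover near 1/4, so TP dips signal boundaries.
    for (a, b), p in sorted(tp.items(), key=lambda kv: kv[1]):
        print(f"{a} -> {b}: TP = {p:.2f}")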




SY_05.5 - Sentence-level speech production: Evidence from a newly learned miniature language and L1

Hulten, A. 1, 2

1 Brain Research Unit, Low Temperature Laboratory, Aalto University, Finland
2 Department of Psychology and Logopedics, Åbo Akademi University, Turku, Finland

The human ability to share novel ideas and thoughts with one another stems from characteristics of human language: the powerful combination of words and syntax enables us to understand and produce an unlimited array of utterances. Applying the rules of syntax, we build up sentences from their lexical constituents and their meanings, arriving at the compositional semantics of the whole sentence. Intriguingly, a person who speaks several languages may shift between different sets of rules, as languages may vary greatly in their grammatical structure. The underlying neural implementations of these processes are far from resolved. In the present study, healthy adult volunteers learned a miniature language (Anigram), with a grammar markedly different from that of their mother tongue (Finnish), in four daily training sessions. Thereafter, during magnetoencephalography scanning, participants generated sentence-level descriptions of pictured events. Sentence versus word-sequence generation was tested separately for each language. The task was divided into a planning phase (picture presentation) and a cloze test (corresponding words overlaid on the picture) that ended with a prompt to generate the final word. Processing of the two languages differed only during the planning phase, with stronger activation for Anigram in the left angular and inferior parietal cortex, interpreted as an increased working memory load for the preparation of novel language output. Production of the sentence-final word, calling for retrieval of rule-based inflectional morphology, was accompanied by increased activation in the left middle superior temporal cortex and did not differ between the languages. Furthermore, the results suggest a prominent role for right-hemisphere temporal regions in integrative processing and in discriminating between word sequences and sentences. The study has implications for models of language learning and provides a new approach for studying the neural mechanisms of sentence-level speech production.




SY_05.6 - Learning object names activates the visual word form area more than learning to read: Evidence from artificial language learning and fMRI

Taylor, J. 1, Rastle, K. 2 & Davis, M. 1

1 MRC Cognition and Brain Sciences Unit, Cambridge, UK
2 Royal Holloway University of London, UK

Dehaene and colleagues propose that the left fusiform gyrus (LFG) contains a specialised visual word form area (VWFA) representing abstract orthographic units. Conversely, Price and others argue that the LFG processes both visual objects and words, attributing word-specific responses to task-related top-down modulation. We combine an artificial language paradigm with fMRI, providing a unique opportunity to explore ventral-temporal specialisation during the learning of novel words and objects. Examining learning maximises task differences: words must be decoded using systematic spelling-sound mappings, whereas objects must be arbitrarily associated with their names. Twenty healthy adults learned new names for 24 novel objects and learned to read 24 new words written in novel symbols, whilst in an MRI scanner. Learning consisted of interleaved phases of training (paired visual-spoken forms) and testing (read words/name objects). Participants learned the trained items (words 69%, objects 68% correct) and generalized their orthographic knowledge to untrained words (62% correct). Relative to unimodal listening/viewing, cross-modal associative learning of visual-spoken form pairings activated bilateral superior parietal cortices, the fusiform gyri and the left hippocampus (p<.01, whole-brain corrected). These regions, active for learning both objects and words, were used as a search volume for subsequent analyses. The LFG (including the VWFA) was more active when learning object-name associations than when learning to read words. The reverse contrast revealed activation in bilateral superior parietal cortices. During a final test phase, covert word reading versus object naming showed the same pattern of dissociation in the LFG and superior parietal areas, with additional activation for words compared to objects in a left mid-occipital region previously associated with pseudoword reading. The weaker involvement of the LFG in orthographic relative to object-label learning perhaps challenges the idea that this region is specialised for reading. Conversely, the strong involvement of parietal regions in orthographic learning suggests a focus for future neuroimaging research on learning to read.
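Schematically, the core comparison is a per-voxel GLM with an objects-minus-words contrast, evaluated within the learning-sensitive search volume. The sketch below uses simulated arrays and an arbitrary placeholder mask; it is not the authors' pipeline.

    # Schematic per-voxel GLM and objects > words contrast; data are simulated.
    import numpy as np

    n_scans, n_voxels = 200, 5000
    X = np.random.rand(n_scans, 3)          # regressors: object learning, word learning, constant
    X[:, 2] = 1.0
    Y = np.random.randn(n_scans, n_voxels)  # stand-in for preprocessed BOLD time series

    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)  # (3, n_voxels) parameter estimates
    c = np.array([1.0, -1.0, 0.0])                # object-name learning > word learning
    effect = c @ beta                             # per-voxel contrast estimate

    # In the study, contrasts were assessed only within regions responsive to
    # cross-modal learning; the mask here is an arbitrary placeholder.
    mask = np.zeros(n_voxels, dtype=bool)
    mask[:500] = True
    print("mean objects > words effect in search volume:", effect[mask].mean())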



