
WILD - Workshop on Infant Language Development, June 20th - 22nd, 2013

Poster Session 3 (with coffee break)

Saturday, June 22nd, 2013 [10:30 - 12:00]

Behavioral and ERP responses to prosodic boundaries in German-learning infants: Evidence for an early adult-like cue weighting

Höhle, B. , Wellmann, C. , Holzgrefe, J. & Wartenburger, I.

University of Potsdam

This paper presents research on German infants' development of prosodic boundary detection. Six- and 8-month-olds' sensitivity to frequent prosodic boundary cues, i.e., pause, pitch and lengthening as single or coordinated cues, was tested using behavioral and ERP measures. Short auditory stimuli consisting of three coordinated names (e.g., Manu and Lilli and Mona) were presented with or without acoustic cues indicating a prosodic boundary after the second name. In naturally produced sequences these boundaries were marked by a pause, a pitch rise and lengthening. Through acoustic manipulation, stimuli were created in which these cues occurred either in combination or in isolation. Using the head turn preference procedure, we found that 6-month-olds detected the boundary only when it was marked by all three cues. In contrast, 8-month-olds also detected boundaries marked by a combined pitch rise and lengthening, but not those solely marked by a pitch rise. However, ERPs recorded from 6-month-olds presented with the same materials revealed a positive deflection for stimuli with combined pitch rise and lengthening but without pause - a pattern that did not occur with sequences that contained only a pitch rise or none of the prosodic boundary cues under consideration. This pattern mirrors ERP effects found in adults, where a so-called closure positive shift (CPS), reflecting the perception of a major prosodic boundary, was also found for the combined pitch and lengthening cue, but not for the sole pitch cue. It is corroborated by behavioral findings from adults and a corpus analysis showing that boundaries solely marked by pitch are not treated as such by adult German listeners and that prosodic boundaries marked by pitch plus final lengthening are the most frequent boundary type in spoken German. Overall, these data support the assumption of an early language-specific cue weighting in the perception of prosodic boundaries.




Adjusting the processing of word prosody and phonemes in the first year of life

Becker, A. 1 , Schild, U. 2 & Friedrich, C. K. 2

1 University of Hamburg
2 University of Tübingen

Even before birth, infants have the capability to perceive some prosodic information, and language-specific prosodic representations of word stress have been shown in infants as young as 4 months (Friederici et al., 2007). This is well before the first language-specific representations for phonemes emerge at approximately 6 months (vowels; Kuhl et al., 1992). Together these findings suggest independent shaping of pathways related to the processing of prosodic information on the one hand and to the processing of phoneme information on the other. Using auditory word onset priming, we followed the neural processing of word prosody and phonemes from 3 to 9 months after birth. Spoken word onsets (primes) were followed by spoken words (targets). Phoneme and prosody overlap between the primes and the onset syllable of the targets was varied across four conditions: (i) "phoneme-match, prosody-match" (e.g., MA - MAma, Engl. mommy; capital letters indicate stressed syllables); (ii) "phoneme-match, prosody-mismatch" (e.g., ma - MAma); (iii) "phoneme-mismatch, prosody-match" (e.g., SO - MAma); and (iv) "phoneme-mismatch, prosody-mismatch" (e.g., so - MAma). Event-related potentials (ERPs) were recorded from 3-, 6- and 9-month-olds. We found phoneme priming in all three groups. By contrast, stress priming was seen in the 3-month-olds and in the 9-month-olds, but not in the 6-month-olds. That is, language processing appears to focus on phonemes in 6-month-olds while neglecting prosodic information. This coincides with the milestone of acquiring language-specific phoneme representations at 6 months after birth. In 3-month-olds and 9-month-olds, phoneme and stress priming did not interact, revealing independent processing pathways for both types of phonological information. In sum, it appears that prosody and phoneme processing pathways develop differently and remain independent in the first year of life.




Rhythmic cues in infant-directed speech: A test of two hypotheses

Kitamura, C. 1 , Burnham, D. 1 , Lee, C. 2 & Todd, N. 3

1 MARCS Institute, University of Western Sydney
2 University of London
3 University of Manchester

Infant-directed speech (IDS) is used in most languages and cultures. Compared to adult-directed speech (ADS), it is characterised by a range of exaggerated prosodic and phonetic features, including higher pitch and wider pitch excursions, slower tempo and hyperarticulated vowels. These features not only assist infants in many of the challenges of early speech perception but also serve an attentional/affective function in mother-infant interactions. The goal of this study was to examine rhythmic cues in IDS using a corpus of Australian English mothers speaking to their infants at birth, 3, 6, 9 and 12 months, and to another adult. Two alternative hypotheses were tested. According to the exaggeration hypothesis, strong-weak stress patterns in English will be exaggerated in IDS for didactic purposes, e.g., word segmentation. Alternatively, according to the developmental hypothesis, stress-based rhythmic cues (as found in English) will be less evident in IDS than in ADS, but become more evident as the infant develops. The hypotheses were tested with two methods of rhythmic analysis: one based on a phonetic model, which focuses on measures of the durational variability of vocalic and consonantal intervals, e.g., ΔV, ΔC, nPVI-V, rPVI-C; and the other based on prominence-based theory, which claims that the fundamental determinant of a language's rhythmic structure is variability in the prominence of its vocalic/sonorant segments, and focuses on measures such as ΔPson, ΔPsyll, rPVI-Pson, rPVI-Psyll. Neither the phonetic nor the prominence-based model supported the exaggeration hypothesis. According to the phonetic model, there were no differences between IDS and ADS. However, the results of the prominence-based analyses supported the developmental hypothesis and revealed that, compared to ADS, suprasegmental cues to stress are reduced in IDS to very young infants but increase from birth to 12 months.
We suggest this is due to the overriding importance of the affective/social and mood-regulation roles of IDS in infancy.
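The duration-based metrics named in this abstract are standard rhythm measures; as an illustration, the two Pairwise Variability Index variants (rPVI and nPVI) can be computed from a list of interval durations as sketched below. The duration values are invented for illustration only.

```python
def rpvi(durations):
    """Raw Pairwise Variability Index: mean absolute difference
    between successive interval durations."""
    pairs = list(zip(durations, durations[1:]))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def npvi(durations):
    """Normalized PVI: each successive difference is scaled by the
    mean duration of the two intervals, averaged, and multiplied by 100."""
    pairs = list(zip(durations, durations[1:]))
    return 100 * sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / len(pairs)

# Hypothetical vocalic-interval durations (ms) from one utterance:
vowels = [80, 120, 60, 150, 90]
print(round(rpvi(vowels), 1))   # 62.5
print(round(npvi(vowels), 1))   # 60.6
```

The normalized variant compensates for speech-rate differences, which is why nPVI is typically reported for vocalic intervals and rPVI for consonantal ones.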




The cognate facilitation effect in bilingual and monolingual toddlers

Von Holzen, K. 1 , Fennell, C. 2 & Mani, N. 1

1 Georg-August-Universität Göttingen
2 University of Ottawa

The purpose of the current study was to explore the cognate facilitation effect (CFE) in a young bilingual population and to examine the effect of increasing phonological overlap on the CFE. Using an intermodal preferential looking task, we examined monolingual (German L1) and bilingual (German L1, English L2) toddlers' recognition of words embedded in both English and German carrier sentences. Target words were: cognate (similar pronunciation across German and English, e.g., fish-Fisch), similar (sounding similar but not phonemically identical, e.g., English tiger /taIgE/, German Tiger /ti:gA/), or non-similar (no phonological similarity across languages, e.g., English bird, German Vogel). We predicted that, like older bilinguals, bilingual toddlers would show a CFE: better recognition of cognate and similar words compared to non-similar words, regardless of test language. If the source of the CFE in bilingual toddlers is the presence of two similar-sounding lexical entries, monolinguals should not show the effect in either language. However, if the CFE can be driven purely by phonological overlap, requiring no second lexicon with corresponding entries, monolingual German toddlers tested in English should show better recognition of cognate, and perhaps similar, words compared to non-similar words, due to phonological overlap with their known language. Our series of experiments demonstrates the CFE in the L1 (German) of bilingual toddlers, but no CFE was found in the L2 (English). Instead, the inhibited response to similar words presented in English may be due to phonological "jitter" in the words' L2 representations (similar-sounding to the L1, but not overlapping). Interestingly, when monolingual toddlers were tested in English, an unknown language, they showed facilitation for cognate words, a (pseudo-)CFE.
This questions the argument that cognate words have a special distinction in the bilingual lexicon and suggests that phonology plays a greater role than formerly credited.




Early word recognition in sentence context: Two-year-olds' sensitivity to sentence-medial mispronunciations and assimilations

Skoruppa, K. 1 , Mani, N. 2 , Plunkett, K. 3 & Peperkamp, S. 4

1 U. of Essex
2 U. of Göttingen
3 U. of Oxford
4 LSCP, Paris

Infants and toddlers encode words with phonetic detail, as evidenced by their sensitivity to mispronunciations of words. To date, no mispronunciation study has tested the recognition of sentence-medial words, which might be particularly difficult because, firstly, these words are acoustically less salient and, secondly, they can have phonetic variants due to across-word phonological processes. For instance, French has voice assimilation in obstruent clusters: the final /s/ of 'bus' ('bus') becomes voiced in 'bus direct' ('direct bus'), where it is followed by the voiced obstruent /d/, but not in 'bus marron' ('brown bus'), where it is followed by the sonorant /m/. Therefore, the form bu[z] is a legal variant of the word 'bus' in 'bu[z] direct' but a mispronunciation in 'bu[z] marron'. Using an IPL paradigm and sentence-medial target words, we measured 24-month-olds' looking times towards two pictures shown side by side, one corresponding to the target word and the other depicting an unfamiliar object. In Experiment 1, French toddlers looked more towards the familiar object in the post-naming compared to the pre-naming phase following standard pronunciations (Regarde le bu[s] maintenant 'Look at the bus now'; t(31)=2.21, p<.04) and assimilations (Regarde le bu[z] devant toi 'Look at the bus in front of you'; t(31)=4.02, p<.001), but not following mispronunciations (Regarde le bu[z] maintenant 'Look at the *buz now'; t(31)<1). Experiment 2 shows that compensation for assimilation is language-specific: English toddlers, whose language does not have voice assimilation, looked more towards the familiar object in the post-naming compared to the pre-naming phase following standard pronunciations (Can you find the bu[s] now?; t(30)=4.41, p<.001), but not following mispronunciations (Can you find the bu[z] now?; t(30)=1.71, p=.098) or pseudo-assimilations (Can you find the bu[z] there?; t(30)=1.33, p>.1).
Thus, both French and English 24-month-olds are sensitive to sentence-medial voicing mispronunciations, but only the French ones compensate for voice assimilation.




Bilingualism modulates infants' attention to the eyes and mouth of a talking person

Pons, F. 1 , Bosch, L. 1 & Lewkowicz, D. 2

1 Universitat de Barcelona
2 Florida Atlantic University

Previous research has shown a shift in attention from the eyes to the mouth of a talking face between 4 and 8 months of age (Lewkowicz & Hansen-Tift, 2012). Shifting attention to the mouth is correlated with the onset of babbling, and it provides 8-month-old infants with direct access to the redundant audiovisual speech cues that can enhance the acquisition of native speech forms. At 12 months, attention begins to shift back to the eyes, permitting infants to access the social cues that are critical in cognitive and communicative development. Compared to monolinguals, infants growing up in bilingual environments may begin to exploit audiovisual speech information earlier in development and may also rely on these cues later into development to help them keep their two language systems separate. Here, we explored whether the developmental trajectory of attentional shifting is also present in bilingual infants. We tested 4-, 8- and 12-month-old bilingual (N=20 per age) and monolingual (N=20 per age) infants. Infants watched a female speaking either in their native or a non-native language while we recorded eye gaze with a Tobii eye tracker. Results showed that 4-month-old monolinguals spent more time looking at the eyes regardless of language, but that 4-month-old bilinguals looked less at the eyes and more at the mouth. At 8 months of age, both monolingual and bilingual infants spent more time looking at the mouth. At 12 months, bilingual infants spent more time looking at the mouth in both languages, while monolingual infants only showed this pattern for the non-native language. These results reveal a different selective attention pattern in bilinguals. They suggest that mouth and lip information provides important speech cues that bilingual infants also use for purposes other than just learning speech sounds.




A longitudinal and multivariate approach to language development: Meta-analysis and new data

Cristia, A. 1 , Seidl, A. 2 & French, B. F. 3

1 Laboratoire de Sciences Cognitives et Psycholinguistique, CNRS, IEC-ENS, EHESS
2 Purdue University
3 Washington State University

An emergent line of work suggests that vocabulary size can be predicted from infant speech perception measures. Here, we critique and augment this literature. First, we meta-analyzed 18 studies that linked speech perception before 12 months to vocabulary size. The median effect size was significant for all linguistic levels (sounds r = .35 [confidence interval .22; .47], words r = .28 [.14; .4], and prosody r = .42 [.18; .61]). These effect sizes overlap with those for well-established non-linguistic predictors arising from habituation (r = .45 [.2; .65]), dishabituation (r = .42 [.29; .54]), and rapid auditory processing (r = .54 [.25; .74]). This overlap suggests that infant speech perception tasks capture individual variation that is stable enough to correlate significantly with vocabulary outcomes. However, it may also be interpreted as evidence that all measures assess a similar construct, perhaps something as non-specific as 'task performance'. We addressed this possibility through a multivariate approach. We gathered multiple measures from 45 infants tested at 5-6.5 months on a linguistic task (preference for trochees) and a cognitive task (Visual Recognition Memory), and at 6.5-8 months on another linguistic task (vowel discrimination) and another cognitive task (A-not-B). If all tasks measure a single performance construct, correlations across all measures should be comparable. This prediction was not met: only the linguistic tasks showed a significant association (r = .3), and correlations between the linguistic tasks and the cognitive tasks were markedly lower (-.2 to .1). Moreover, these results are most compatible with a view of language development in which advancement in the processing of vowels is more independent from cognitive processing than from prosodic knowledge.
While we have not yet collected outcome vocabulary measures, these data already suggest that a multivariate approach may greatly inform our understanding of how infants begin to build language.




Bilingual infants show a language dominance effect when perceiving VOT contrasts

Liu, L. & Kager, R.

Utrecht Institute of Linguistics - OTS, Utrecht University

It remains unclear whether mono- and bilingual infants follow the same developmental trajectories in the first year of life. While language exposure has been shown to impact language development (Hoff, 2006), few studies have addressed the issue of dominance or degree of exposure (DoE) within bilingual populations. Ramon-Casas and colleagues (2009) found that sensitivity to vowel substitutions involving the Catalan-specific /e-ɛ/ contrast was positively correlated with the proportion of Catalan exposure in 18-26-month-old Catalan-Spanish toddlers. Garcia-Sierra and colleagues (2011) reported that 10-12-month-old English-Spanish bilingual infants' speech discrimination abilities are related to DoE in both languages. The current study focuses on consonant perception in mono- and bilingual infants, studying the effects of language exposure and dominance. 120 monolingual Dutch and 166 bilingual infants with Dutch as one L1, aged 5-6, 8-9, 11-12 and 14-15 months, were tested on their discrimination of a three-way stop contrast along the VOT continuum: prevoiced /ba/, voiceless /pa/, and aspirated /pha/, via an oddity visual habituation paradigm (Houston et al., 2007). The other language of the bilingual infants was English, German or Chinese (aspiration contrast /pa/-/pha/), or French or Spanish (prevoicing contrast /pa/-/ba/, as in Dutch). Infants were habituated on /pa/ and tested on the habituated /pa/ and the new categories /ba/ and /pha/. Results show an initial sensitivity (5-6 months) of all infants to the acoustically salient /pa/-/pha/ contrast. Monolingual Dutch infants' sensitivity to /pa/-/ba/ increases, whereas their sensitivity to /pa/-/pha/ decreases, over the first year of life. Bilingual infants retain their sensitivity to the aspiration contrast only if it occurs in one of their native languages. Moreover, they are more sensitive to the contrast of their dominant language. Mono- and bilingual infants' early consonant perception is thus affected by language exposure.
In bilingual infants, language dominance impacts their sensitivity to consonant contrasts, and may influence category formation in the first year.




Same classroom, different experience? Sex differences in preschoolers' numeracy and spatial language

Abad, C. , Odean, R. , Costales, A. , Barriga, T. & Pruden, S. M.

Florida International University

Pre-kindergartners' spatial skills are predicted by the amount of spatial language (e.g., "big", "circle", "curvy") they hear from their primary caregivers across their first four years (Pruden, Levine & Huttenlocher, 2011). The amount of spatial language these primary caregivers produce varies with child sex: boys hear more spatial language than girls, and boys produce more spatial language than girls (Pruden & Levine, in preparation). Children, however, spend a considerable amount of time in school settings by age 4. The current study explores the relation between potential sex differences in educators' spatial language use, potential sex differences in children's use of spatial language in the classroom, and children's spatial skills. A sample of 12 pre-kindergarten children (6 boys, 6 girls) wore LENA Digital Language Processors for a total of 4 classroom hours during the 2012-2013 school year. Language samples were coded for spatial language use. Children completed a spatial assessment battery assessing their ability to: (1) rotate shapes and objects (Children's Mental Transformation Task); (2) reconstruct patterns using colored blocks (WPPSI-III's Block Design); (3) make analogies between two pictures depicting spatial information (Spatial Analogies); (4) comprehend words for a variety of spatial concepts (Boehm-3); and (5) understand early numeracy (TEMA-3). We hypothesize that the amount of spatial language heard in the classroom will predict children's spatial skills. We predict that boys will hear more spatial language than girls, will produce more spatial language than girls, and will perform significantly better on the spatial tasks than girls. Data collection is ongoing and preliminary data on 6 children will be reported.
Findings that reveal differential language input and child performance based on child sex would suggest that girls are at a disadvantage in spatial ability, a skill linked to Science, Technology, Engineering, and Mathematics (STEM) success, before entering kindergarten.




Infants learn from multimodal speech distributions

ter Schure, S.

ACLC, University of Amsterdam

In their first year, infants' perceptual sensitivity to non-native speech sounds decreases, while sensitivity to native speech sounds remains. One mechanism that could explain this development is distributional learning (Maye, Werker & Gerken, 2002): infants learn to categorize sounds into phonemes on the basis of their relative frequencies in the input. So far, this hypothesis has been investigated only in the context of auditory exposure to the phoneme distribution, and then mostly for consonants. In the current study, we tested whether infants learn a vowel contrast from a multimodal distribution of speech. Visual and auditory instances of a woman saying /fɛp/ and /fæp/ were manipulated to create an audiovisual continuum of this English vowel contrast. In an eye-tracking experiment we presented Dutch 8-month-old infants with a one-peaked version (midpoints most frequent) or a two-peaked version (endpoints most frequent) of this continuum. After 2.5 minutes of exposure, all infants were habituated to one of the training videos. Next, a video from the other side of the continuum was played. Infants in the two-peaked training group showed a larger difference in looking time between this switch item and a repetition of the habituation item (M 5.2 s at switch vs. 4.6 s at same) than infants in the one-peaked group (M 4.6 s at switch vs. 5.5 s at same). The effect of training condition on the difference between looking times at the two items was marginally significant (F(1,36) = 3.849, p = 0.058). This finding shows that infants can use the distribution of multimodal information to build non-native vowel categories.
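The one-peaked versus two-peaked familiarization described in this abstract follows the distributional-learning design of Maye, Werker & Gerken (2002). As a minimal sketch of how such exposure sequences could be constructed over an 8-step continuum (step 1 = clearest token of one vowel, step 8 = clearest token of the other): all token counts below are invented for illustration, not the study's actual frequencies.

```python
import random

# Illustrative token frequencies over an 8-step audiovisual continuum.
one_peaked = {1: 1, 2: 2, 3: 4, 4: 5, 5: 5, 6: 4, 7: 2, 8: 1}  # midpoints most frequent
two_peaked = {1: 2, 2: 5, 3: 4, 4: 1, 5: 1, 6: 4, 7: 5, 8: 2}  # endpoints most frequent

def exposure_sequence(freqs, seed=0):
    # Expand the frequency table into a randomized familiarization list.
    tokens = [step for step, n in freqs.items() for _ in range(n)]
    random.Random(seed).shuffle(tokens)
    return tokens

# Both groups hear the same number of tokens; only the shape of the
# distribution differs, which is what carries the category information.
print(len(exposure_sequence(one_peaked)), len(exposure_sequence(two_peaked)))  # 24 24
```

A two-peaked distribution implies two categories (endpoint clusters), a one-peaked distribution a single category, which is the contrast the habituation-switch test probes.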




Left-dominant functional networks related to speech processing in the infant brain

Homae, F. 1 , Watanabe, H. 2 & Taga, G. 2

1 Department of Language Sciences, Tokyo Metropolitan University
2 Graduate School of Education, University of Tokyo

The left-hemispheric dominance of speech processing is a prominent characteristic of the human brain. However, the developmental origin of this dominance has never been fully clarified. Here, we used 94-channel near-infrared spectroscopy (NIRS; ETG-7000, Hitachi Medical Corporation, Tokyo, Japan) to measure cortical activation and the temporal correlations between cortical regions in 3-month-old infants (N = 20). We presented Japanese speech sounds (duration: about 4 s) to sleeping infants every 10 or 20 s (63 sentences in total). During the inter-stimulus intervals, no sounds were presented. The averaged time course of oxygenated hemoglobin (oxy-Hb) signals from the onset of stimulus presentation was calculated for each measurement channel in order to determine which cortical regions were responsive to speech sounds. Furthermore, we examined the temporal correlations between the oxy-Hb signals using all of the continuous data. We found that the temporal regions of the left and right hemispheres showed significant activation. The frontal, parietal, and occipital regions also showed significant activation in response to speech sounds. Correlation analyses revealed interhemispheric connectivity, especially between homologous regions, as well as intrahemispheric connectivity between adjacent regions and between distant regions in both hemispheres. When we directly compared the correlations of all intrahemispheric pairs between the left and right hemispheres, we found that the correlations between the frontal and temporal regions in the left hemisphere were higher than those between the corresponding pairs in the right hemisphere. Our findings demonstrate that both the left and right temporal regions are involved in the processing of speech sounds and that long-range intrahemispheric connectivity is more pronounced in the left hemisphere.
The development of frontotemporal connectivity in the left hemisphere might have caused the hemispheric differences in brain function and structure and facilitated language acquisition in infancy.




Infants' perception of intonation categories

Frota, S. , Butler, J. , Correia, S. , Severino, C. & Vigário, M.

Universidade de Lisboa

Little is known about the developmental course of infants' perception of linguistic intonation, as previous studies on pitch contrasts have focused on the acquisition of lexical pitch (lexical pitch accent, as in Japanese, or lexical tone, as in Mandarin). Intonation languages (e.g., English, Portuguese) use pitch height, pitch direction and pitch timing to convey phrasal meanings, like sentence type and pragmatic distinctions. We investigated European Portuguese-learning infants' perception of two native pitch contrasts: the statement/yes-no question distinction, marked by a pitch direction contrast (falling/rising), and the broad/narrow focus distinction, signalled by a pitch timing contrast (early/late fall within the syllable). We asked (i) whether the developmental perceptual trajectory of the two contrast types was similar and (ii) how it related to previous reports on pitch perception by infants learning lexical pitch systems. Using a visual fixation procedure, we tested infants' discrimination of the statement/question contrast at 5-6 months and 8-9 months (Exp1) and of the broad/narrow focus contrast at 6-7 and 11-12 months (Exp2). Results from Exp1 showed successful discrimination for both age groups (n=38), whereas preliminary results from Exp2 suggest that only the older infants were able to discriminate the contrast (n=15). These results are in line with data from Proso-Quest, a parental report for the assessment of prosodic development in Portuguese infants/toddlers, showing an earlier development of question comprehension relative to narrow focus (respectively, 12 and 15 months; percentile 75, n=80). Our experimental findings suggest that the perceptual trajectory of intonation categories depends on the primary cues involved, supporting earlier results that show a protracted development of the perception of timing relative to pitch height/direction (Bion et al. 2011). 
Furthermore, our findings suggest that perception of intonation categories based on a pitch direction contrast may be as precocious as lexical tone perception (Yeung et al. 2012).




Looking for the bouba-kiki effect in prelexical infants

Fort, M. , Weiss, A. , Martin, A. & Peperkamp, S.

Laboratoire de Sciences Cognitives et Psycholinguistique (DEC-ENS, EHESS, CNRS), Paris, France

The link between a speech sound and its meaning is supposed to be arbitrary (De Saussure, 1959). However, adults and toddlers systematically associate certain pseudowords, such as "bouba" and "kiki", with round and spiky shapes, respectively (Maurer et al., 2006; Ramachandran & Hubbard, 2001). We investigated whether this "bouba-kiki effect" is present from the earliest stages of development (i.e., in pre-lexical infants) or arises later with language experience. To our knowledge, only one study has reported such sound-symbolic associations in prelexical infants (4 months; Ozturk et al., 2012), but the numbers of infants (N=12) and stimuli (2 shape-pseudoword combinations) were very small.
Here, we report three experiments with 5- and 6-month-olds that found no bouba-kiki effect at all. In Experiments 1 and 2, French 6-month-old infants (N=24 per experiment) were presented with 12 different combinations of a shape and multiple tokens of a pseudoword, half congruent (e.g., round shape + /buba/), half incongruent (e.g., round shape + /kike/). Infants did not show any looking time difference for congruent versus incongruent pairings, whether the shape was still (Experiment 1) or growing/shrinking in synchrony with the presentation of the pseudoword (Experiment 2). They only looked longer overall at the round shapes (Experiment 1). In Experiment 3, twenty-four 5-month-old infants were presented with 28 different combinations of two side-by-side shapes (one round, one spiky), with five repetitions of one pseudoword token. Again, we only found a preference for the round shapes. To conclude, in three experiments using carefully controlled stimuli and different paradigms, we failed to find a bouba-kiki effect in prelexical infants. We argue that the evidence for a bouba-kiki effect in prelexical infants is so far weak, and that null results like the present ones should not be kept in a drawer.




Influence of educators' numeracy and spatial language on pre-kindergarteners' numeracy and spatial skills

Abad, C. , Odean, R. , Costales, A. & Pruden, S. M.

Florida International University

Children's numeracy/spatial skills are predictive of success in careers related to Science, Technology, Engineering and Mathematics (STEM). Previous research suggests that the amount of numeracy and spatial language used in the home predicts pre-kindergartners' numeracy/spatial skills (Gunderson & Levine, 2011; Pruden, Levine & Huttenlocher, 2011). Given the substantial amount of time pre-kindergarten children spend in settings outside the home, the present study seeks to understand the role of educators in pre-kindergarten children's numeracy/spatial skills. We examine the quantity of numeracy/spatial language used by educators and how this language use relates to children's numeracy/spatial skills. We recorded 14 pre-kindergarten educators interacting naturally with children in their classroom. Interactions between educators and children were recorded using a Digital Language Processor (DLP; LENA Foundation). Educators wore the DLP for a total of 4 classroom hours during circle time, free play, and math and science curriculum. Transcriptions of educator talk were coded for use of numeracy/spatial language. Children completed a numeracy/spatial assessment battery. This battery assessed children's ability to: (1) rotate shapes and objects (Children's Mental Transformation Task); (2) reconstruct patterns using colored blocks (WPPSI-III's Block Design); (3) make analogies between two pictures depicting spatial information (Spatial Analogies); (4) comprehend words for a variety of spatial concepts (Boehm-3); and (5) understand early numeracy (TEMA-3). Our prediction is that educators who use more numeracy and spatial talk when teaching math and science curriculum will see greater growth in children's numeracy and spatial skills. Data collection and coding of educator language are ongoing; however, preliminary data show significant correlations between the assessments within the numeracy and spatial battery.
Finding a significant relation between educator language use and pre-k children's numeracy and spatial skills is critical to understanding how we can ensure that pre-k children have those early school readiness skills needed for success in the STEM disciplines.




Toddlers' ability to contend with unfamiliar accents

van Heugten, M. 1 & Johnson, E. 2

1 Laboratoire de Sciences Cognitives et Psycholinguistique, CNRS/EHESS/DEC-ENS
2 University of Toronto

Children have been found to struggle with accent-induced differences in the realization of newly-learned words until after their second birthday (Schmale et al., 2011). Here, we examine whether toddlers, like adults, recognize known words in drastically distinct unfamiliar accents and whether brief accent exposure facilitates word recognition. Using the Preferential Looking Procedure, Canadian-English-learning 28-month-olds in Experiment 1 were presented with pictures of two objects on a TV screen, one of which was named in a Scottish accent (e.g., Look at the cow!). This test phase was preceded by a 2-minute exposure phase featuring either the same Scottish-accented or an Australian-accented speaker. Results showed that the proportion of fixations toward the target pictures following target word onset (.67) reliably exceeded chance level. Surprisingly, this held regardless of whether the preceding story was read in Scottish- or Australian-accented English (p=.838), suggesting that 28-month-olds deal with accent-related variability 'on the fly' when words occur in sentence frames. Experiment 2 subsequently examined whether exposure to the target accent would aid children under more challenging listening conditions. To increase difficulty, Scottish-accented words were presented in isolation. As before, exposure was provided either to the Scottish or to an Australian speaker. Overall target word recognition (.60) was indeed lower than in Experiment 1 (p=.038), but was nonetheless unaffected by the storybook reader (p=.977). In a follow-up experiment, a Canadian-accented storybook reader led to similar levels of word recognition in Scottish-accented English (.57), indicating that toddlers had not simply become more tolerant of acoustic deviation, but rather recognized the target words even without accent exposure. 
Taken together, this study shows that toddlers recognize known words in unfamiliar regional accents and do so most efficiently when words are presented in sentence frames. By 28 months of age, children thus readily contend with accent variation during spoken word recognition.




Phonological competition effects for known words: Evidence from Dutch 18-month-olds

Junge, C. 1 , Benders, T. 2 & Levelt, C. 3

1 University of Amsterdam, Amsterdam, the Netherlands
2 Radboud University Nijmegen, Nijmegen, the Netherlands
3 Leiden University, Leiden, the Netherlands

Lexical neighbors are words that differ in one phoneme (e.g., 'pear'-'bear'). Infants have difficulties learning novel words that are minimal pairs (i.e., 'bin'-'din', Stager & Werker, 1997; Nazzi, 2005) or are lexical neighbors of familiar words (i.e., novel 'tog' - familiar 'dog', Swingley & Aslin, 2007). We do not know yet whether infants, like adults (Allopenna, Magnuson & Tanenhaus, 1998), find it difficult to recognize words in the presence of lexical neighbors. This study examines whether and how infant word recognition is affected by having a potential target that is a lexical neighbor of the actual target. We tested Dutch 18-month-olds in a cross-modal preferential-looking task, since in Dutch most toddlers understand two minimal-pair triplets: 'hand'-'hond'-'mond' (hand-dog-mouth) and 'bed'-'bad'-'bal' (bed-bath-ball; Junge, Cutler & Hagoort, 2012). This allowed us to test word recognition of these particular items when a phonological neighbor was present or not. Preliminary results (we coded 18/40 infants) showed that: 1) Infants looked shorter at targets when the distracter was a neighbor rather than a non-neighbor (t[17]=2.47, p=.024); nevertheless, even with lexical neighbors, word recognition was significantly different from chance (t[17]=4.55, p<.001). 2) When the two pictures were lexical neighbors, infants had the weakest recognition when the disambiguating point was in the onset ('hond' vs. 'mond'), intermediate recognition for nucleus neighbors ('hond' vs. 'hand'), and strong recognition for coda neighbors ('bal' vs. 'bad'; F[1,49]=3.83, p=0.056). 3) When infants heard a non-present target ('hond') and saw its two lexical neighbors ('mond' and 'hand'), they preferred the target with the same vowel (i.e. 'mond'; t[17]=3.63, p=.002). Together, these results provide strong evidence that infants with small lexicons can recognize words in the presence of a lexical neighbor. 
However, recognition is hampered by the presence of a lexical neighbor, especially when the disambiguation point occurs earlier in the word.




Weighing up predictors of early vocabulary development: The role of babble, pointing and maternal education

McGillion, M. 1 , Herbert, J. 1 , Pine, J. 2 , Keren-Portnoy, T. 3 , Vihman, M. 3 & Matthews, D. 1

1 University of Sheffield
2 University of Liverpool
3 University of York

The transition to conventional language in the second year of life forms a cornerstone of development that social interactions and future academic achievement can build on. Understandably, therefore, researchers have attempted to establish which factors can predict the substantial and persistent individual differences observed in early vocabulary development. However, until now, different strands of developmental research have tended to focus on one type of factor or another in isolation, despite calls for a more integrated approach to the study of early word learning (Hall & Waxman, 2004). The present study considered three key predictors of conventional word learning simultaneously: onset of canonical babble, onset of pointing, and parental education. We aimed to measure and weigh up these predictors to explore for the first time how they relate to one another and to determine which best predicts subsequent word learning. Drawing on an existing longitudinal dataset of naturalistic video-recorded dyadic interaction, we coded the mother's level of education, the onset of babble (two stable consonants) and the onset of index-finger pointing in a single sample of 46 infants. A parental report, the MacArthur-Bates Communicative Development Inventory, was used to measure each infant's expressive and receptive vocabulary knowledge at 18 months. Babble onset was not related to pointing onset or maternal education. However, infant pointing was moderately correlated with maternal education (r=0.35, p<0.05). Furthermore, regression analyses revealed that pointing onset was a significant predictor of receptive vocabulary, whereas babble onset was a significant predictor of expressive vocabulary at 18 months. Maternal education was a significant predictor of both vocabulary outcome measures. 
These findings highlight how pre-linguistic vocal and gestural abilities, while often produced in an integrated fashion early on, are not synchronised and moreover make independent but equal contributions to word learning.




Simulating infants' recognition of their own name: The role of past experience

Bergmann, C. 1, 2 , ten Bosch, L. 1 & Boves, L. 1

1 Centre for Language Studies, Radboud University Nijmegen, The Netherlands
2 International Max Planck Research School for Language Sciences

Past experience shapes infant language acquisition and plays a decisive role in performance during tests in the lab, most crucially in tasks that investigate recognition of putatively well-known words. Infants show that they can indeed recognise highly frequent words from their daily experience, most prominently their own name, early on, and distinguish them from matched foils. However, it is not entirely clear which factors influence performance and which aspects of past experience infants benefit from. Using computational modelling, we simulate a word-recognition task with explicit reliance on previous encounters. The computational model used in this study assumes that infants bootstrap their language acquisition using only general-purpose perceptual and learning procedures. Specifically, the model does not rely on symbolic representations of linguistic units such as words or phonemes. Learning is incremental, i.e., internal representations are updated after each new utterance. Input is presented as real speech. In our study, we manipulate two factors that have been suggested to crucially shape infants' recognition abilities: the amount of experience (word frequency) and whether a number of different voices or just one speaker uttered a word (occasional variability). Word frequency has been assumed to aid recognition, but studies using high-frequency words encountered large between-participant variability, pointing to a high degree of individual differences. Variability of speakers has been suggested to aid word learning, as it helps infants pay attention to the crucial aspects of the input signal. However, in their daily life infants with one main caregiver only occasionally encounter different voices. Our modelling results show that frequency, but not occasional speaker variability, improves recognition. 
By examining the impact of these two factors on modelled infant recognition performance in isolation, we can make predictions for future lab studies that will reveal how much variability is beneficial and to what degree new voices help word learning.




Learning language's abstract and rule-like structure

Willits, J.

Indiana University

Learning to represent language's hierarchical structure and its nonadjacent dependencies is thought to be difficult for association-based mechanisms. Most notably, it is argued that they have extreme difficulty learning some of language's abstract and rule-like relations. In the following work, I present two simulations of language learning using a simple recurrent network (SRN), demonstrating that SRNs are capable of learning abstract and rule-like knowledge. In Simulation 1, I show that SRNs can learn distance-invariant representations of nonadjacent dependencies when they experience those dependencies under variable conditions. For example, SRNs trained on sequences in which A predicts B consistently at a distance of 3 (e.g. A-x1-x2-B) do not easily transfer their A-B knowledge to other distances (e.g. A-x1-x2-x3-B). However, SRNs that experience distance variability (A-x1-B, A-x1-x2-B) easily transfer their expectation of A predicting B to distances they have not seen. The fact that SRNs can learn distance-invariant relations is evidence that association-based mechanisms capture this important property of natural language. These results are also consistent with broad evidence that variability is useful in language acquisition. In Simulation 2, I show that SRNs can learn abstract rule-like relationships. Based on experiments with 7-month-old infants, Marcus (2000) argued that connectionist networks are fundamentally incapable of learning abstract, rule-like knowledge. Contra Marcus's claims, I will show that SRNs (even purely localist ones that do not represent microfeatural information about phonology or semantics) can learn arbitrary, abstract, and rule-like knowledge, as long as improper assumptions are not built into the model. Together, these simulations show that (contrary to previous claims) SRNs are capable of learning abstract and rule-like nonadjacent dependencies. 
The studies refute the claim that neural networks and other associative models are fundamentally incapable of representing hierarchical structure, and show how recurrent networks can provide insight about principles underlying human learning and the representation of language.
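The variable-distance setup of Simulation 1 can be illustrated with a minimal Elman-style SRN sketch. The vocabulary, hidden-layer size, and random weights below are illustrative assumptions, not the parameters of the reported simulations; only the recurrent forward pass over a hypothetical A-x1-x2-B item is shown, with no training loop.

```python
import numpy as np

# Minimal Elman-style simple recurrent network (SRN) sketch.
# Hypothetical setup: localist (one-hot) symbols and next-symbol
# prediction over A-x..x-B sequences of variable length.
rng = np.random.default_rng(0)

VOCAB = ["A", "B", "x1", "x2", "x3"]
IDX = {s: i for i, s in enumerate(VOCAB)}
V, H = len(VOCAB), 8

Wxh = rng.normal(0, 0.1, (H, V))   # input -> hidden
Whh = rng.normal(0, 0.1, (H, H))   # context (previous hidden) -> hidden
Why = rng.normal(0, 0.1, (V, H))   # hidden -> output

def one_hot(sym):
    v = np.zeros(V)
    v[IDX[sym]] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(seq):
    """Return a next-symbol distribution for each position in seq."""
    h = np.zeros(H)                # context units start at zero
    outs = []
    for sym in seq:
        # New hidden state depends on current input AND previous state,
        # which is what lets the network carry the A...B dependency.
        h = np.tanh(Wxh @ one_hot(sym) + Whh @ h)
        outs.append(softmax(Why @ h))
    return outs

preds = forward(["A", "x1", "x2", "B"])   # one training-style item
```

Training such a network on items of mixed distance (A-x1-B, A-x1-x2-B, ...) and probing it at an unseen distance is the distance-invariance test the simulation describes.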




Letting clusters and paths emerge from early semantic hypernetwork structure of features and their nouns

Maouene, M. 1 , Maouene, J. 2 & Canada, K. 2

1 Ecole Nationale des Sciences Appliquees, Tanger
2 Grand Valley State University

The shared features that characterize the noun categories that young children first learn are a formative basis of the human category system. Recently, Hills and colleagues (2009a, b) described the potential categorical information contained in the features of early-learned nouns by examining the binary graph-theoretic properties of developing noun-feature networks with a deterministic method: clique percolation. The networks were built from the overlap of perceptual and functional features for words normatively acquired by children at three different ages: 20 months (21 nouns), 25 months (56 nouns) and 30 months (130 nouns). The resulting networks had small-world structures, indicating a high degree of feature overlap in local clusters. Results also suggested that overlapping features among these nouns created higher-order groupings common to adult taxonomic designations and ad hoc categories. However, these methods are limited: they are only descriptive and yield minimal semantic information, such as the degree of connectivity of local structures, whether an unspecified link of similarity exists, or which cliques of connectivity can be identified. To address these limitations, we present a different type of representation, the hypernetwork (Berge, 1956), which includes semantics, and a different formalism, formal concept analysis (FCA), a non-deterministic method that builds relationships of containment (Wille, 1984). Further, machine-learning algorithms automatically cluster and build inclusions for the features and their nouns at the three ages mentioned above. We compare our results to those obtained by Hills and colleagues. The power of the system lies in its automaticity and its ability to form many intermediate clusters at all stages of the network's growth, in addition to showing the structure's emerging paths. 
The results offer new and testable hypotheses on the role of shared features in the emergence of meaningful pathways within local structures, fundamental in categorizing systems.
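As a rough illustration of what FCA computes, the sketch below derives formal concepts (extent/intent pairs in which the set of nouns and the set of shared features determine each other exactly) from a tiny noun-feature table. The nouns and features are invented for illustration and are not the normative feature norms used in the study.

```python
from itertools import combinations

# Toy noun -> feature relation (hypothetical, for illustration only).
objects = {
    "dog":   {"has_legs", "is_animate", "barks"},
    "cat":   {"has_legs", "is_animate"},
    "table": {"has_legs"},
    "ball":  {"is_round"},
}

def extent(attrs):
    """All nouns having every attribute in attrs."""
    return frozenset(o for o, a in objects.items() if attrs <= a)

def intent(objs):
    """All attributes shared by every noun in objs."""
    if not objs:
        return frozenset(set.union(*objects.values()))
    return frozenset(set.intersection(*(objects[o] for o in objs)))

def concepts():
    """Enumerate formal concepts by closing every subset of nouns."""
    found = set()
    nouns = list(objects)
    for r in range(len(nouns) + 1):
        for combo in combinations(nouns, r):
            i = intent(frozenset(combo))   # shared features of the subset
            e = extent(i)                  # closure: all nouns with them
            found.add((e, i))
    return found

for e, i in sorted(concepts(), key=lambda c: len(c[0])):
    print(sorted(e), "<->", sorted(i))
```

The intermediate clusters the abstract mentions correspond to concepts like ({dog, cat}, {has_legs, is_animate}), which sit between single nouns and the whole lexicon in the containment order. (Brute-force subset enumeration is fine for a toy table; real feature norms would need a dedicated FCA algorithm.)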




Referential expectation in infancy

Marno, H. 1 , Farroni, T. 2 & Mehler, J. 1

1 Language, Cognition and Development Lab, SISSA, Trieste, Italy
2 DPSS, Universita degli Studi di Padova, Padova, Italy

Human language is a special auditory stimulus, and infants are equipped to acquire it very rapidly from birth. Indeed, there is evidence that newborns are already able to distinguish languages they have never heard before, based on their rhythmical characteristics (Mehler et al., 1988; Nazzi et al., 1998; Ramus et al., 1999, 2000), to detect acoustic cues that signal word boundaries (Christophe et al., 1994), to discriminate words based on their patterns of lexical stress (Sansavini et al., 1997) and to distinguish content words from function words by detecting their different acoustic characteristics (Shi et al., 1999). Moreover, they are also able to recognize words with the same vowels after a 2-min delay (Benavides-Varela et al., 2012). In sum, there is ample evidence that infants are born with a unique sensitivity for processing language, but when they start to understand that language is a referential symbol system and that words refer to entities in the world is still unknown. In the present study we addressed this question. Fifteen 4-month-old infants were shown videos of a female face who was either talking normally, talking backwards, or silently moving her lips. After each movie the face disappeared, and an object appeared on either the left or the right side of the screen. Preliminary results showed that infants looked at the object faster in the normal speech condition than in the backward speech and silent conditions. Thus, these results support the hypothesis that infants do not only possess great speech-processing abilities, but also have referential expectations about language, and in the presence of speech they are ready to search for possible referents, at least from 4 months of age.




Contingencies between verbs, body parts, and argument structures in maternal and child speech: A corpus study in Telugu

Latha Maganti, M. 1 , Maouene, J. 2 , Witherspoon, T. 2 , Collinge, A. 2 , Notter, R. 2 & Nesheim, M. 2

1 University of Hyderabad
2 Grand Valley State University

Many theorists of grammar and verb learning, studying English, have insisted that the abstract and relational nature of verbs and syntax is too hard for children to acquire using observational cues, and that children thus learn a verb's grammar from the number of arguments it occurs with and/or the probability of its appearance in a particular frame. However, theorists studying other languages, notably languages with massive argument ellipsis, have understandably had issues with this perspective (Rispoli, 1995 [Japanese]; Narashiman, Budwig, and Murty, 2005 [Hindi]). Recently, growing evidence suggests a strong link between verbs and the neural processes that underlie body movement and perception in adults and children. These findings link bodily effectors to verbs via a concrete core meaning (e.g., jumping is about LEG). Here we build upon these relationships and show connections between body regions, verb meanings, and syntax using new evidence from a corpus study of infant and maternal speech in Telugu, a Dravidian language from the South of India known for dropping arguments. We ask whether the verbs used in nine common syntactic frames are specifically linked to one of three main regions of the body: HEAD, ARM, LEG. The speech of 18- to 36-month-olds (n=18, 2 groups of 9 children), their mothers (n=18), and other members of the household (n=18) was examined for the use of 78 early-learned verbs. In total, 6907 utterances were hand-coded for their associations with the HEAD, ARM, and LEG regions. The associations were provided by 45 native Telugu speakers aged 4.5 to 5.5 years. Significant non-random relations were found both overall and for each age group using correspondence analyses, Fisher's exact tests, and multiple one-sample chi-square tests. The results are discussed in terms of their relevance for both argument structure development and embodied cognition.




Early sensitivity to morpho-phonological alternations: A cross-linguistic study

Buckler, H. 1, 2 & Fikkert, P. 1

1 Radboud University Nijmegen
2 International Max Planck Research School for Language Sciences

Preverbal infants do not perceive ham and hamlet as related words (Jusczyk et al., 1999) but they can use the distribution and frequency of inflectional affixes to posit links between stems and inflected forms (Marquis & Shi, 2012). Affixation often involves more than just segment concatenation and may trigger alternations within morphological paradigms. Intraparadigmatic voicing alternations occur in Dutch and German due to final devoicing. This phonotactic constraint prohibits voiced obstruents word-finally; when followed by a vowel-initial suffix, however, voicing is permitted, e.g. Dutch be[t]-be[dd]en 'bed(s)'. This study, using the head-turn preference procedure, investigated whether 9-month-olds can incorporate their morphological and phonotactic knowledge and assign bare roots and inflected forms to the same lexical entry when there is suffixation and voicing alternation. Testing Dutch and German infants enables us to address the contribution of language-specific factors to the acquisition of voicing alternations, e.g. the functional load of voicing and the frequency of alternations. Infants were familiarised on passages containing monosyllabic, singular nouns and tested on lists of bisyllabic, plural forms. Experiment 1 involved suffixation only (dot-dotten). Both Dutch and German infants succeeded in relating forms to the same lexical entry; orientation times to familiar and novel items differed significantly. This early sensitivity to inflectional affixation has not previously been attested in these languages, and not at this young age. Experiment 2 introduced voicing alternations as well as suffixation (mut-mudden). Here neither group perceived links between singular-plural pairs when a voicing alternation was present. 
Despite differences in frequency of alternations and the importance of the voicing contrast in the language being acquired (both of which may aid acquisition) 9-month-olds are not yet aware that a morpheme may have more than one surface form, seemingly treating [t] and [d] as contrastive phonemes that map to separate lexical entries, even when this is incorrect.




Sentential codeswitching in highly proficient Spanish-English bilinguals

Litcofsky, K. & van Hell, J.

Penn State University

Children who grow up speaking a second language (L2) as a heritage language often produce code-mixed utterances, which contain a mixture of words and phrases from both languages. As more proficient adults, bilinguals also produce utterances containing both languages, but which follow systematic patterns. While this codeswitching appears fluent, psycholinguistic and neurocognitive research has shown that switching between languages incurs a processing cost in both production and comprehension. However, the majority of studies examined language switching between isolated items, and little is known about the processing of codeswitches in sentence context. We investigated sentential codeswitching in Spanish-English bilinguals living in an L2 English environment, who codeswitch frequently in their daily life. Stimuli were 160 sentences that began in Spanish or English and could contain a codeswitch into the other language or not. All sentences were semantically and grammatically correct. Codeswitching was examined behaviorally, using self-paced reading, and with event-related potentials (ERPs). Preliminary results show that codeswitched words, as compared to non-switched words, are read more slowly and evoke an N400, and that these switch costs are larger when switching into the non-dominant language. Results will be discussed in terms of previous research and models of bilingual language processing, as well as how the nature of the switch cost may be related to proficiency in each of a bilingual's languages.




Autosegmental phonology and early literacy: Multilinear model and geminates between emergent and conventional writing

Ruvoletto, S.

University of Paris 8

In studies of early literacy, researchers have identified two principles adopted by
children, irrespective of their mother tongue, to judge whether a sequence of characters can be read (Ferreiro, Teberowsky 1979). The theoretical position supported in this study is based on the 'internal variety principle', which says that, before schooling, children identify a written word as a sequence of characters that differ from one another. This supposes they have no representation for double symbols. However, in the Italian writing system, duplicated consonant letters (called geminates) occur very frequently (e.g. 'zz' in pizza). How do Italian children deal with them? Participants: 80 Italian children enrolled in kindergarten or first grade were subdivided into 3 groups according to their age and schooling. I classified their phonological competences with the standardized CMF test (Vicari, Trasciani & Marrotta, 2008) and their writing skills through the writing of 8 words. Methods: Children were tested one by one in a semi-structured interview through 3 tasks: 1) phonetic segmentation with pictures; 2) phonetic segmentation with written words; 3) choice between minimal pairs with or without geminates. Results: Task 3 showed that most Italian children between 4 and 5 years old do not accept words with geminates, in accordance with the internal variety principle. Comparing the results of tasks 1 and 2, I can say that Italian children's phonological representation is organized in different tiers (syllabic structures and temporal units) that develop at different moments of language acquisition. The components of all the tiers are acquired during the phonological stage (4/5 years of age), whereas the integration of the information represented in the tiers of the temporal units takes place only when the learning of writing starts (6/7 years of age).




Children's development of vocabulary and MLU and input frequency effects in the whole CHILDES data base

Lee, S. 1 , Jun, J. 2 , Min, M. 2 & Suh, J. 2

1 Cyber Hankuk University of Foreign Studies
2 Hankuk University of Foreign Studies

The study deals with the whole data set of the corpus of typically-developing English children in the CHILDES database, including a total of 8,042 transcripts. The purpose of this study is to reorganize the data set in order to create a useful way of using these big data and to reexamine the findings of previous studies regarding English children's acquisition of vocabulary and inflectional morphemes with increased statistical power. The first stage of the project was to rearrange the whole data set by children's age, to chart children's vocabulary development by counting word frequency, inflected word frequency, type/token ratios, etc., and syntactic development by calculating MLU by age, and to compare the results with the mothers' in order to find a possible input frequency effect. The data were also compared between US and UK children and mothers. The 8,042 transcripts included 2,272 UK transcripts (1,355 morphologically tagged) and 5,770 USA transcripts (2,363 morphologically tagged) from 265 one-year-olds, 406 two-year-olds, 371 three-year-olds, 228 four-year-olds, 162 five-year-olds, 95 six-year-olds, 103 seven-year-olds, 59 eight-year-olds, 40 nine-year-olds, 58 ten-year-olds, 8 eleven-year-olds, one 12-year-old, and one 16-year-old. The findings were: (i) type/token ratio increased with children's age, and a similar pattern was found for mothers; (ii) there was an overlap between children's and mothers' data in terms of the most frequent words and inflected words; and (iii) children's MLUs increased until age 6 (reaching 5.65), whereas mothers' MLUs started from 4.40 and increased until age 4 (reaching 5.40), and UK mothers' and children's MLUs were slightly shorter than US mothers' and children's at each age. The findings suggest a possible influence of input frequency on children's acquisition of vocabulary and MLU development.
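The two developmental measures can be sketched as follows on hypothetical utterances. Real CHILDES analyses use the CLAN tools and count MLU in morphemes; here MLU is approximated in words, and the example utterances are invented.

```python
# Type/token ratio (TTR) and word-based MLU from a list of utterances.
def ttr(utterances):
    """Distinct word forms (types) divided by total words (tokens)."""
    tokens = [w.lower() for u in utterances for w in u.split()]
    return len(set(tokens)) / len(tokens)

def mlu_words(utterances):
    """Mean length of utterance, approximated in words."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

# Hypothetical child utterances (7 tokens, 5 types, mean length 7/3).
child = ["want cookie", "mommy want cookie", "doggie go"]
print(round(ttr(child), 2), round(mlu_words(child), 2))   # 0.71 2.33
```

Computing these per age band for children and mothers separately is what allows the input-frequency comparison the abstract describes.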




Rule learning in infants, adults, and zebra finches with human speech and birdsong stimuli

Geambasu, A. 1 , Spierings, M. 2 , Levelt, C. 1 & ten Cate, C. 2

1 Leiden University Centre for Linguistics
2 Institute Biology Leiden

Marcus et al. (1999) showed that 7-month-old infants can be trained on an ABA, ABB, or AAB grammar, generalize the grammar to novel input, and discriminate it from the non-training grammars. We performed a series of carefully constructed comparable experiments to examine whether 6- and 9-month-old infants, adults, and zebra finches are able to perform this task. We used both naturally recorded female and male speech (syllables) and zebra finch song elements in all three populations. We compared the ability to generalize training grammar AAB or ABA, and to discriminate it from the non-training grammar. Preliminary data show that human adults perform at ceiling in a Go/No-go paradigm with human speech elements. While zebra finches are able to discriminate the two grammars with both human and birdsong stimuli after training in the Go/No-go paradigm, the speech stimuli seem to be more difficult for them to learn. In addition, as a group they are not able to generalize the training grammar to novel input, although individual zebra finches seem to show evidence of generalization (cf. van Heijningen et al., 2012). Preliminary infant data from a Preferential Looking task suggest that this paradigm is appropriate for a grammar discrimination experiment, and that the older infants perform this task better than the younger ones. We will present data on both species that point to a species-specific, yet developmentally dependent, ability to generalize grammatical rules to novel input in the auditory domain.




Individual variation of word acquisition age: A comparison of Japanese- and English-speaking infants

Sugiyama, H. , Kobayashi, T. & Minami, Y.

NTT Communication Science Labs

Previous studies have argued that vocabulary growth differs among individual infants due to variation in parents' input (e.g., Tardif 1996). However, some words may be acquired at particular times according to infants' daily life and/or physical development, since parents often use such words (e.g., body parts and games/routines) at similar times. This predicts that the individual variation of a word's acquisition age reflects the degree to which its acquisition depends on universal properties beyond individual, cultural, and linguistic factors; i.e., word acquisition age should correlate with the individual variation of word acquisition age. We examine this prediction using Japanese and English vocabulary growth data. Using our Japanese MacArthur CDI database (1,699 participants) together with the English Lex2005 CDI database (Dale et al., 1996), we analyzed 154 words that had a clear correspondence between Japanese and English and whose acquisition proportion at 30 months was over 60%. We calculated two parameters for each word: 'acquisition age', defined as the date when the acquisition proportion reaches 50%, and 'individual variation' of acquisition age, defined as the inverse of the highest derivative value of the fitted logistic function. Our analysis shows that acquisition age strongly correlates with individual variation in both languages (J: r=0.83*, E: r=0.74*; *=p<.01); in contrast, individual variation between the languages is only weakly correlated (r=0.28*). A more precise analysis with between-language comparisons shows little difference in the individual variation of words in the 'body parts' and 'games/routines' categories, suggesting that the individual variation may be due to culture-general daily life and/or infants' physical development. 
Our analysis also reveals a cross-linguistic inconsistency: in Japanese, the words with small individual variation include personal belongings in the 'furniture and rooms' and 'outside things and places to go' categories; in English, such words include common nouns like 'car' and 'ball'. This may reflect a cultural difference in parents' educational policies (e.g., enjoying vs. teaching).
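The two word-level parameters can be sketched from a fitted logistic curve p(t) = 1/(1 + exp(-k(t - t0))): the 50% point falls at t0, and since the curve's maximum derivative is k/4 (reached at t0), the individual-variation measure is its inverse, 4/k. The parameter values below are illustrative, not fitted values from the CDI data.

```python
import math

def logistic(t, k, t0):
    """Proportion of children producing the word at age t (months)."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

def acquisition_age(k, t0):
    """Age at which 50% of children know the word: p(t) = 0.5 -> t = t0."""
    return t0

def individual_variation(k, t0):
    """Inverse of the logistic's maximum derivative, which is k/4 at t0."""
    return 4.0 / k

# A steep curve (large k) means children acquire the word at similar
# ages, hence small individual variation; a shallow curve the reverse.
print(acquisition_age(k=0.8, t0=20.0))        # 20.0
print(individual_variation(k=0.8, t0=20.0))   # 5.0
```

This makes the reported correlation concrete: it is a correlation, across words, between each word's t0 and its 4/k.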




Dynamic analyses of Mandarin: The effect of gestures and tones

Liu, Y. 1 , Huang, Z. 1 & Chen, J. 2

1 Department of Athletic Performance, National Taiwan Normal University
2 Department of Chinese as a Second Language, National Taiwan Normal University

Motor control plays an important role in speech production. Using a reiterant speech paradigm, Kelso et al. (1985) proposed a dynamic model of speech production in which the estimated stiffness value was identified as the tuning parameter for different gestures and prosodic conditions. The tones of Mandarin are a unique feature that helps identify words and understand meanings. The purpose of this study was to explore the relations between stiffness values and different gestures and tonal conditions in spoken Mandarin. Six university students and 2 pre-school children, all native speakers of Taiwanese Mandarin, performed 16 reiterant speech tasks. Each task was made up of 4 or 5 sets of disyllabic words in which all syllables were replaced with /ba/. A 200 fps high-speed camera and a digitizing system were used to capture the kinematics of the lip movements, and the amplitude and peak velocity of the lip movements were derived to approximate the stiffness of the movement system while producing the rhythmic speech tasks. The results show that the average stiffness value is higher for open gestures than for close gestures, but the coefficient of variation of the stiffness value was greater for close gestures. The children show a similar trend, with lower stiffness values in the open gestures. No consistent trend across tonal conditions was observed. These results provide evidence supporting the dynamic model of speech production.
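A common kinematic reading of this approach treats the gesture as a mass-spring system, where peak velocity divided by movement amplitude is proportional to the square root of stiffness over mass, so the ratio serves as a relative stiffness index. The sketch below applies this to a synthetic lip trace; the frame rate mirrors the 200 fps camera, but the signal is fabricated, not the study's data.

```python
import numpy as np

fps = 200.0                                  # camera frame rate (Hz)
t = np.arange(0, 0.25, 1.0 / fps)            # 0.25 s of samples
lip = 5.0 * np.sin(2 * np.pi * 4.0 * t)      # fake 4 Hz gesture, ~5 mm

def stiffness_index(position, fps):
    """Peak velocity / amplitude: a relative stiffness proxy under the
    mass-spring view of a speech gesture (units: 1/s)."""
    velocity = np.gradient(position) * fps   # mm/s from frame differences
    amplitude = position.max() - position.min()
    return np.abs(velocity).max() / amplitude

print(round(stiffness_index(lip, fps), 2))
```

For a pure sinusoid of angular frequency omega, this index comes out near omega/2, so comparing it across open/close gestures or tones compares effective stiffness independently of movement size.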




Influence of predominance in noun learning examined by period from comprehending to producing words: A cross-linguistic statistical investigation using CDI

Minami, Y. & Kobayashi, T.

NTT

Previous studies have found that children's noun learning predominates over their verb learning (Gentner et al. 1982, 1988, 2002; Maguire et al., 2006), since across several languages children's early productive nouns appear earlier than verbs do. However, no study has statistically investigated which learning process significantly contributes to this noun-learning predominance. Focusing on the process between comprehending and producing a word, this study investigates how this process cross-linguistically affects learning-speed differences between verbs and nouns, using the English and Spanish Lex2005 CDI database (Dale et al., 1996) together with our Japanese CDI database (1,699 toddlers from 10 to 32 months). We defined the word-comprehension and word-production days as the days when 50% of the children comprehend and produce the word, respectively. These days were determined by approximating the word comprehension and production rate curves with two logistic functions, setting the functions to 0.5 and solving them by Newton's method. The differences in word-comprehension day between verbs and nouns (verb-comprehension day minus noun-comprehension day) were -24 days (English), -27 days (Japanese) and -63 days (Spanish) (p<0.05). The word-production day differences between verbs and nouns (verb-production day minus noun-production day) were 58 days, 33 days and 55 days, respectively (p<0.01). Since these absolute values are strongly affected by word selection in the CDI, we investigated relative differences between comprehension and production days. We found that the differences drastically increase from comprehension to production. This result shows that the process occurring between word comprehension and production drives the increase. Therefore, we directly calculated the period between comprehension and production, finding that these periods for English, Japanese and Spanish were 82 days, 60 days and 118 days, respectively. 
This shows that the process between comprehension and production of a word is a significant factor of predominance in noun learning by children.
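The 50%-day estimation described above can be sketched as follows. This is a minimal illustration, not the authors' actual fitting pipeline: the logistic parameterization (slope a, intercept b) and the example values are assumptions chosen for clarity.

```python
import math

def logistic(t, a, b):
    """Logistic acquisition curve: estimated proportion of children
    who know the word at age t (in days)."""
    return 1.0 / (1.0 + math.exp(-(a * t + b)))

def day_at_half(a, b, t0=400.0, tol=1e-6, max_iter=100):
    """Newton's method: solve logistic(t, a, b) = 0.5 for t,
    i.e. the day on which 50% of children know the word."""
    t = t0
    for _ in range(max_iter):
        p = logistic(t, a, b)
        f = p - 0.5
        # derivative of the logistic with respect to t
        df = a * p * (1.0 - p)
        step = f / df
        t -= step
        if abs(step) < tol:
            break
    return t

# Example with assumed parameters: a = 0.01, b = -5.0.
# For this parameterization the 50% crossing has the closed form t = -b/a = 500,
# which the Newton iteration should recover.
print(round(day_at_half(0.01, -5.0), 1))
```

For a pure logistic curve the 0.5 crossing can also be read off in closed form (t = -b/a), which makes the Newton result easy to check; the iterative method is what the abstract reports, and it generalizes to curves fitted from noisy CDI proportions.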




Using syllables as a treatment unit for dyslexic children's writing skills

Loury, F. , Simoës-Perlant, A. & Soum-Favaro, C.

Université Toulouse II-Le Mirail (laboratoire Octogone-Lordat)

The goal of this study is to understand the mental representations and cognitive mechanisms involved in writing. More precisely, the role of syllables in children's segmentation of oral forms in dictation tasks will be assessed. The effects of syllables have been debated primarily in reading: effects of syllabic frequency (Magnan & Ecalle, 1998) and of syllabic structure, both in terms of congruence (Cole et al., 1999) and syllabic complexity (Sprenger-Charolles & Siegel, 1997). For writing under dictation, Soum et al. (in press) showed that the context of liaison causes written segmentation mistakes for children in cycle 2 due to syllabic misalignment: liaisons lead to a dissociation between word boundaries and syllable boundaries (e.g., "un petit tunivers" instead of "un petit univers"). Little work has been done on written language disorders and access to syllables. However, it appears that in reading, even if dyslexic children have difficulties accessing phonetic representations of language, they are sensitive to syllabic structure and frequency (Maïonchi-Pino, 2008). This study examines writing by giving thirty children with developmental dyslexia a dictation task. Forty-eight experimental stimuli were divided into three blocks corresponding to the three most common liaison consonants: /n/, /z/ and /t/. For each consonant, there were two conditions: the consonant appeared in a liaison context (e.g., un petit univers) or at the beginning of a word (e.g., un petit tunnel). The children wrote each stimulus they heard on a page of a notebook, hearing each stimulus twice before writing. If dyslexic children align the syllable with the beginning of the word when segmenting the speech stream, then liaisons should hinder lexical access and the results would demonstrate that syllables are relevant processing units for language. The results are currently being analyzed.




Spanish Differential Object Marking in early bilinguals

Ticio, M. E.

Syracuse University

Spanish grammatically marks a subset of its accusative objects (Differential Object Marking, DOM), while English does not. The distribution of Spanish DOM is determined by many factors, including the definiteness, agentivity, affectedness or animacy of the object, and the lexical semantics of the verb (Torrego (1998), Rodríguez-Mondoñedo (2008), among others). DOM's complexity has been invoked to account for the difficulties that L2 learners and adult bilinguals display in its acquisition (Martoccio (2012)), contrasting with the flawless accuracy of Spanish monolingual children (L1) acquiring DOM (Rodríguez-Mondoñedo (2008), Montrul (2011)). This study examines the acquisition of DOM in the spontaneous production of early Spanish-English simultaneous bilinguals (2L1). The data come from longitudinal databases (127 files) of seven Spanish-English 2L1 (age range: 1;1-3;6; MLUw range: 0-5) with different linguistic environments (i.e., majority language). The results show that the 2L1 group has a later emergence age for DOM, ranging from 2;2 to 2;11 (cf. 1;09-2;04 in L1, Rodríguez-Mondoñedo (2008)); a low accuracy rate of object marking, averaging 27%; and a lack of commission errors, whereas commission and omission errors are evenly distributed in L1 (cf. Rodríguez-Mondoñedo (2008)). In addition, these results suggest a connection between the emergence of dative constructions and DOM, with all subjects producing dative constructions prior to DOM. Overall, the 2L1 were closer to L2 learners in their results, which supports the conclusion that the 2L1 had not acquired DOM in Spanish in the period examined. To the extent that this conclusion is correct, the study empirically documents that, under reduced input conditions, 2L1s develop core aspects of their language, but their grammatical systems show a marked tendency toward simplification.
This simplification precludes them from completely acquiring language-specific properties, such as DOM in Spanish, and leaves them with incomplete grammars (Sorace (2005), Montrul (2008), Pires & Rothman (2009)).




Vowels and consonants at birth: A NIRS study

Bouchon, C. 1, 2, 3 , Nazzi, T. 1, 2, 3 & Gervain, J. 1, 2, 3

1 University Paris Descartes, Laboratoire de Psychologie de la Perception UMR 8158
2 CNRS ? UMR 8158
3 LabEx Empirical Foundations of Linguistics

Consonants (Cs) carry more information at the lexical level, and vowels (Vs) at the morphosyntactic level (Nespor et al., 2003). This "division of labour" would constitute a significant learning bias supporting lexical and morphosyntactic acquisition. While adults use these biases, infant data are more mixed, though a consonantal bias has been observed at 12 and 8 months (Hochmann et al., 2011; Nishibayashi & Nazzi, 2012). However, in a lexical task, 6-month-olds exhibited a vocalic bias (Hochmann, 2010), suggesting that a certain amount of speech exposure is necessary before infants use the C/V functional asymmetry. This study investigates the origins of the C/V functional asymmetry through rule detection at birth (a proxy for syntax), measuring newborns' brain responses with NIRS during exposure to a speech signal carrying a repetition of Vs or Cs. The extraction and generalization of a repetition pattern carried by syllables (ABB vs. ABC sequences) in CVCVCV sequences is crucial for rule learning and is present at 7.5 months (Marcus et al., 1999). Precursors of this mechanism are present at birth (Gervain et al. 2008; 2011). The stimuli consisted of CVCVCV items implementing 3 different rules: ABC (e.g. "mulevi"); ABBc (e.g. "muleli"); ABBv (e.g. "muleve"). The division of labor predicts that, if the sequences are presented in a blocked design (blocks of 6 items), the elicited rule-extraction process should favor the ABBv rule over the other rules. Results from 24 newborns showed that ROIs (fronto-temporal and parieto-temporal) in both hemispheres are differently involved depending on the condition. ABBv evoked a stronger left response in fronto- and superior temporal channels, suggesting a perceptual bias towards the detection of repetitions carried by Vs rather than Cs. We are currently conducting additional experiments to further explore whether these results reflect the origins of the C/V asymmetry.




Development of short term memory in Specific Language Impairment

Yague, E. & Torrens, V.

Facultad de Psicologia, Universidad Nacional de Educación a Distancia

Specific Language Impairment (SLI) is a delay in the onset and development of language in the absence of any neurological, cognitive, or psychological difficulties. The disorder covers a wide range of abnormal development involving phonology, the lexicon, morphology and syntax. A question researchers have been trying to answer is whether the different symptoms found in SLI are due to a low capacity of phonological short-term memory (Edwards & Lahey 1998, Ellis Weismer et al. 2000, Gathercole & Baddeley 1990). To test this capacity we applied a nonword repetition test with high- and low-frequency syllables (Aguado et al. 2006), a word and nonword repetition test, and the WAIS digit span test (Kaplan & Sacuzzo 2005). We present research on eleven Spanish-speaking children with Specific Language Impairment, compared with eleven normally developing age-matched children. The children are aged between 4 and 9 years. We found that normally developing children score higher on the word (p < 0.05), nonword (p < 0.05), and digit (p < 0.05) tests than children with SLI. We also compared the results of the nonword repetition test with high- and low-frequency syllables across the two groups. The nonword repetition test differentiates normally developing children from SLI children better than the word repetition test, and low-frequency syllables differentiate the two groups better than high-frequency syllables. We conclude that phonological short-term memory is a crucial factor in specific language impairment in childhood from very early on.




The development of sound-shape correspondence in the monolingual and bilingual mind

Pejovic, J. , Molnar, M. & Martin, C.

Basque Center on Cognition, Brain, and Language, Donostia, Spain

Sound-shape correspondence is a bias in multisensory integration between acoustic and visual information, specifically between the label (auditory) and shape (visual) of an object. For instance, participants tend to associate the pseudo-word (PW) kiki with angular objects, but the PW bouba with rounded objects. This bias is known as the bouba-kiki effect and has been observed in both monolingual adults (Nielsen & Rendall, 2011) and infants (Ozturk et al., 2012) from various language backgrounds. However, it is unclear whether this effect is specific to the combination of phonemes found in bouba and kiki or whether it extends to other PWs specific to the participants' native language background(s). The aim of the current set of experiments is therefore to identify PWs that are associated with angular and round shapes by adult monolingual and bilingual users of Basque and Spanish, and to test whether infants exhibit similar biases. In Experiment 1, 6 Spanish monolingual, 6 Basque monolingual and 6 Basque-Spanish bilingual adults rated auditory PWs on four dimensions: roundness, angularness, Spanish-likeness, and Basque-likeness. Overall, the Spanish and bilingual groups rated the PWs as significantly more Basque-like than the Basque group did; the Spanish group rated the PWs as significantly more round than the bilingual and Basque groups did. To test whether Basque and Spanish monolingual and bilingual infants exhibit sensitivities to shapes and sounds similar to those of adults, we selected two pairs of native-language-appropriate stimuli based on Experiment 1: a rounded PW vs. an angular PW, and a neutral PW vs. a neutral PW. These pairs, in different experiments, have been presented to 4-month-old infants in a preferential looking paradigm with congruent (shape-sound match based on adult ratings) and incongruent (shape-sound mismatch based on adult ratings) pairings. Preliminary infant results will be discussed in relation to the adult findings.



