

Timing and Language

Saturday, October 1st, 2011 [08:30 - 10:30]

SY_10. Timing and Language

de Diego-Balaguer, R. 1, 2 & Kotz, S. 3

1 ICREA, University of Barcelona and IDIBELL, Barcelona, Spain
2 INSERM U955 and Ecole Normale Supérieure, Paris, France
3 MPI for Human Cognitive and Brain Sciences, Leipzig, Germany

Speech is an acoustically complex collection of sounds that needs to be decoded into meaningful phonemes, syllables and words. Although it is obvious that sounds in speech are processed sequentially and that precise timing is necessary for speech perception and production, this temporal dimension has often been ignored in the study of language. Recent data and novel frameworks of speech processing are nevertheless emerging that highlight the importance of temporal processing at different levels of language processing. The proposed symposium will present work on different aspects of temporal processing in relation to language, including the cortical and subcortical networks implicated in auditory-motor integration and timing, and the importance of temporal processing in healthy and pathological language acquisition. Virginia Penhune presents the importance of rhythmic processing in the auditory-motor integration relevant for musical and speech processing. Franck Ramus’ data illustrate the effects of a deficient sampling rate in the auditory cortex on the development of different deficits associated with dyslexia. Ruth de Diego-Balaguer presents results indicating that rhythmic information in prosody acts as an attentional cue, allowing listeners to allocate attention selectively to the times at which they expect to hear critical speech segments, thereby enhancing language learning. William Idsardi will focus his talk on the phonetic information encoded in an early auditory response. Finally, Sonja Kotz will present data supporting an integrative framework highlighting the involvement of a cortical and subcortical network for temporal and predictive coding in speech processing.



SY_10.1 - Auditory-motor interactions in musical rhythm perception and production

Penhune, V.

Laboratory for Motor Learning and Neural Plasticity, Department of Psychology, Concordia University

The work that I will present in this talk was motivated by the observation that auditory and motor information appear to be preferentially coupled in both music and speech. This suggested to us that there might be preferential interactions between the auditory and motor systems of the brain. Based on this hypothesis, we have conducted a series of neuroimaging experiments designed to identify the brain networks involved in integrating auditory and motor information. To do this, we have examined performance of rhythm synchronization tasks in order to identify the features of auditory stimuli that facilitate motor response. My talk will review evidence from functional magnetic resonance imaging (fMRI) studies conducted to elucidate the neural basis for interactions between the auditory and motor systems in the context of musical rhythm perception and production. Our results show that auditory features of rhythmic stimuli exert a strong influence on motor performance, and that motor regions of the brain are sensitive to the temporal organization of auditory stimuli. Finally, I will propose a model for auditory-motor interactions in rhythm production that engages the posterior superior temporal gyrus, the ventral and dorsal premotor cortex, as well as the ventrolateral and dorsolateral prefrontal cortex. These findings will also be discussed in the context of models of auditory-motor integration for the perception and production of speech rhythms.

SY_10.2 - Altered cortical entrainment to fast acoustic modulations reflects phonological and working memory deficits in dyslexia

Lehongre, K. 1, Ramus, F. 2, Schwartz, D. 3, Pressnitzer, D. 4 & Giraud, A. 1

1 Inserm U960 - Ecole Normale Supérieure, Paris, France
2 LSCP, CNRS UMR 8554, Paris, France
3 CRICM, CNRS UMR 7225, Inserm UMR-S 975, Paris, France
4 UMR 8158 CNRS - U. Paris Descartes & DEC, Ecole Normale Supérieure, Paris, France

Whether dyslexia primarily reflects an auditory, phonological or memory deficit has been intensely debated for the past 30 years. We hypothesized that an anomaly in phonemic sampling could account for both phonological and working memory deficits. We used a frequency-tagging MEG paradigm and structural MRI to assess cortical entrainment to acoustic modulations ranging from 10 to 80 Hz, a property that reflects the cortical ability to sample sensory inputs. We expected dyslexic subjects to exhibit abnormal responses in the 30-40 Hz frequency range that carries important phonemic cues. While normal readers exhibited left-dominant auditory steady-state responses around 30 Hz, dyslexic subjects only showed enhanced entrainment to modulation frequencies outside the phonemic range, up to 80 Hz. The 30 Hz entrainment deficit in the left auditory cortex correlated positively with behavioral measures of phonological output processing, but negatively with those reflecting phonological input. In addition, entrainment to faster rates correlated negatively with verbal working memory capacity. In dyslexics, the left auditory cortex fails to selectively entrain to acoustic modulations conveying phonemic cues, but phase-locks to faster acoustic modulations. While the latter anomaly accounts for verbal working memory deficits in dyslexia, the former accounts for distinct facets of the phonological deficit.

SY_10.3 - Prosody facilitates language learning in adults by orienting attention

de Diego-Balaguer, R. 1, 2, 3, 4 , Lopez-Barroso, D. 2 , Rodriguez-Fornells, A. 1, 2 & Bachoud-Lévi, A. 3, 4, 5, 6

1 ICREA, Barcelona, Spain
2 University of Barcelona and IDIBELL, Barcelona, Spain
3 UPEC and IRBM, Créteil, France
4 Ecole Normale Supérieure, Paris, France
5 AP-HP Groupe Henri-Mondor Albert-Chenevier, Créteil, France
6 Centre de référence Maladie de Huntington, Créteil, France

Prosody is a rhythmic cue that plays a critical role in speech processing. However, the mechanism by which prosodic information affects the way we treat the speech signal and influences learning is poorly understood. In two different experiments we recorded event-related potentials (ERPs) and functional hemodynamic changes (fMRI) while participants learned artificial languages with and without prosodic cues, implemented by the introduction of subtle pauses between words. Languages were built by concatenating trisyllabic words with embedded rules (e.g. “puliku, pufaku, pureku”) analogous to simple morphosyntactic dependencies (e.g. “is playing, is dancing, is talking”). The absence of prosodic cues induced a selective increase of activation in the left ventrolateral prefrontal cortex, both for the structured language and for random syllable streams where learning was not possible. In the ERP experiment, this effect arose very early in sensory processing, as a negative increase around 100 ms after syllable onset (N1 component). Structured streams with prosodic cues showed an additional positive-going increase around 200 ms (P2 component) associated with rule learning. This effect also characterised those participants who learned the rule in the absence of prosodic information, who displayed increased bilateral medial parietal cortex activation associated with the top-down attention system. Two conclusions can be drawn from the present pattern of results: (i) prosody aids segmentation by acting as a sensory cue that automatically triggers attention, and (ii) it serves as a cue to reorient attention to timing information relevant for rule learning. This function fits well with the observation that in natural languages prosodic boundaries in speech coincide with syntactic boundaries.

SY_10.4 - Phonetic information encoded in an early auditory response

Idsardi, W.

University of Maryland, USA

Speech comprehension and speaker recognition are generally accurate, fast and effortless. This suggests that we should be able to find robust neural correlates of the acoustic and phonetic information that listeners use to make these decisions. In this talk I will review a recent series of magneto-encephalographic (MEG) studies examining the timing and localization of a major, early auditory cortical response, the M100 (also termed the N1m for its relation to the electro-encephalographic N1 response). This response occurs approximately 100 ms after the onset of a well-defined acoustic event, and various properties of this response are known to correlate with phonetic properties of interest. For example, the latency of the M100 response varies with both pitch frequency (including inferred pitch; Monahan, de Souza & Idsardi, 2008) and formant structure (Monahan & Idsardi, 2010). In addition, the inferred location of the generator of the M100 within the auditory cortex (Scharinger, Merickel, Riley & Idsardi, 2011; Scharinger, Poe & Idsardi, 2011) forms a kind of tonotopic and articulo-topic map, with the anterior-posterior location reflecting the place of articulation (F2 and labial, coronal or dorsal), the superior-inferior axis reflecting front vowel height and F1, and the medial-lateral axis reflecting overall spectral gravity (e.g. rounding and F3) in vowels and in sibilants (Lago, Krorod, Scharinger & Idsardi, 2010). The M100 also contains information relevant for other speech categories, including speaker identity and dialect affiliation (Scharinger, Monahan & Idsardi, 2011).

SY_10.5 - Effects of timing and rhythm in auditory and speech processing

Kotz, S.

Neurocognition of Rhythm in Communication Group, MPI for Human Cognitive and Brain Sciences, Leipzig, Germany

Cortical neural correlates of linguistic functions are well documented in the neuroscience and neuropsychological literature. However, the influence of non-linguistic functions such as rhythm and timing is still understudied in speech and auditory language research. This is surprising for several reasons, as these functions (i) play a critical role during learning, (ii) have a compensatory function in brain diseases and developmental disorders, (iii) can reveal commonalities and differences between domains (e.g. music and language), and (iv) can further our understanding of subcortical contributions to auditory linguistic and non-linguistic functions. For example, basal ganglia and cerebellar circuitries are involved in beat perception, timing, attention, memory, language, and motor behaviour (see Kotz, Schwartze, & Schmidt-Kassow, 2009). I will discuss our recent speech processing framework (Kotz & Schwartze, 2010), which synthesizes evolutionary, anatomical, and neurofunctional concepts of auditory, temporal and speech processing. This framework will be supported by recent event-related potential (ERP) and functional magnetic resonance imaging (fMRI) data from healthy, patient, and L2 populations that demonstrate the impact of timing and rhythm on auditory, speech and language processing.

©2010 BCBL. Basque Center on Cognition, Brain and Language. All rights reserved. Tel: +34 943 309 300 | Fax: +34 943 309 052