ESCOP 2011, 17th MEETING OF THE EUROPEAN SOCIETY FOR COGNITIVE PSYCHOLOGY 29th Sep. - 02nd Oct.

Speech perception / Auditory perception

Sunday, October 02nd, 2011 [18:00 - 20:00]

PS_3.076 - Consonants and vowels support rule learning in rats

de la Mora, D. & Toro, J. M.

Universitat Pompeu Fabra

Recent research suggests that structural generalizations are preferentially performed over vowels rather than over consonants. Nevertheless, the source of these functional differences between consonants and vowels is unknown. One possibility is that participants map the acoustic differences between consonants and vowels onto functional differences. If so, we would expect to find similar results in nonhuman animals. Our aim was to study rats’ capacity to generalize rules implemented over vowels and consonants. In Experiment 1, rats were trained to discriminate CVCVCV nonsense words in which the vowels followed an AAB structure in half of the words and an ABC structure in the other half, whereas the consonants were combined randomly. In Experiment 2, the rules were implemented over the consonants and the vowels varied at random. In the test phase of both experiments, eight new test words were presented. Following the presentation of each AAB or ABC word, lever-pressing responses were registered and food was delivered. We found that rats could generalize rule-like structures over both vowels and consonants to new tokens. Our results support the hypothesis that the acoustic differences between consonants and vowels are, per se, insufficient to trigger differences in which units are preferentially used for rule learning.
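To make the stimulus design concrete, here is a minimal, purely illustrative sketch (not the authors’ materials): it builds CVCVCV nonsense words in which either the vowel tier or the consonant tier follows an AAB or ABC pattern while the other tier varies at random. The segment inventories and the make_word helper are hypothetical.

```python
import random

# Hypothetical segment inventories, for illustration only
CONSONANTS = ["b", "d", "k", "l", "m", "p", "s", "t"]
VOWELS = ["a", "e", "i", "o", "u"]

def make_word(pattern, structured_tier="vowels"):
    """Build a CVCVCV nonsense word whose vowels (or consonants)
    follow an AAB or ABC pattern; the other tier is combined randomly."""
    pool = VOWELS if structured_tier == "vowels" else CONSONANTS
    if pattern == "AAB":
        x, y = random.sample(pool, 2)
        structured = [x, x, y]
    elif pattern == "ABC":
        structured = random.sample(pool, 3)
    else:
        raise ValueError("pattern must be 'AAB' or 'ABC'")
    other_pool = CONSONANTS if structured_tier == "vowels" else VOWELS
    random_tier = [random.choice(other_pool) for _ in range(3)]
    if structured_tier == "vowels":
        syllables = [c + v for c, v in zip(random_tier, structured)]
    else:
        syllables = [c + v for c, v in zip(structured, random_tier)]
    return "".join(syllables)

# Experiment 1 style item: rule over vowels, e.g. 'dipiko' (vowels i-i-o)
print(make_word("AAB", structured_tier="vowels"))
# Experiment 2 style item: rule over consonants, e.g. 'tatuke' (consonants t-t-k)
print(make_word("AAB", structured_tier="consonants"))
```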




PS_3.077 - The “spoon effect”: How a spoon over the tongue alters the perception of the vowel /e/

Schmitz, J. & Sebastián-Gallés, N.

Brain and Cognition Unit. Universitat Pompeu Fabra. Barcelona, Spain.

Recent TMS studies have shown that speech perception can be influenced by activating the motor areas involved in articulating the same sound. In this experiment we test whether blocking the articulation movements of the vowel /e/ by placing a spoon over the tongue can alter the perception of different /e/ sounds in a similar way. The vowel /e/ is a close-mid front vowel, articulated by lifting the front part of the tongue to a middle height in the mouth. A spoon over the tongue interferes with this movement by pressing the front part of the tongue down. The results show that when participants have a spoon over the tongue, they accept /e/ variants with a higher tongue position in the back of the mouth more often than when they have no spoon in the mouth or a spoon at the side of the mouth. This indicates that participants take their current tongue position (front part of the tongue down and back part of the tongue relatively higher) into account when rating the different /e/ sounds they hear. This is in line with previous research showing a role of the motor cortex in speech perception in difficult tasks.




PS_3.078 - Eye tracking during French Cued Speech perception: preliminary results

Bayard, C. 1, 2 , Tilmant, A. 1, 2 , Leybaert, J. 1 & Colin, C. 2

1 Laboratoire Cognition Langage Développement, ULB, Brussels, Belgium
2 Unité de recherche en Neurosciences Cognitives, ULB, Brussels, Belgium

French Cued Speech (CS) was developed to help deaf people understand speech. Since this system is multi-signal, involving lip movements and cues (hand movements), we conducted an eye tracking study to examine whether its perception involves integrative processing and how expertise affects it. Our paradigm consisted of three conditions without sound: (1) a multi-signal condition, consisting of a video of a speaker who simultaneously spoke and cued words/pseudowords; (2) a meaningless multi-signal condition, consisting of a video showing a speaker producing words/pseudowords with meaningless hand movements; and (3) a lipreading condition, consisting of a video showing a speaker uttering words/pseudowords without hand movements. Participants were presented with three options (the correct answer, a labial distractor and a gestural distractor) and instructed to select the correct answer from among them. Distractors were words/pseudowords that shared the same labial image or cue as the uttered words/pseudowords. Behavioral and eye tracking data (interest regions: lips or hand) were collected from two groups of hearing people: beginner CS-experts and participants completely naïve to CS. These promising first results suggest that only beginner CS-experts integrate cue and labial information. We are currently testing hearing experienced CS-experts and deaf CS-experts; these new data will be reported at the conference.
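As a generic illustration of how such interest-region data are often summarized (a sketch under our own assumptions, not necessarily the authors’ pipeline), the snippet below computes the proportion of fixation time falling on the lips versus the cueing hand for one trial; the dwell_proportions helper and the example durations are hypothetical.

```python
def dwell_proportions(fixations, regions=("lips", "hand")):
    """fixations: list of (region_label, duration_ms) tuples for one trial.
    Returns the proportion of total in-region fixation time per interest region."""
    totals = {region: 0.0 for region in regions}
    for region, duration in fixations:
        if region in totals:
            totals[region] += duration
    grand_total = sum(totals.values())
    if grand_total == 0:
        return totals
    return {region: time / grand_total for region, time in totals.items()}

# Hypothetical trial: 600 ms of fixation on the lips, 400 ms on the cueing hand
print(dwell_proportions([("lips", 350), ("hand", 400), ("lips", 250)]))
# -> {'lips': 0.6, 'hand': 0.4}
```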




PS_3.079 - Training French listeners to perceive word stress

Peperkamp, S. 1 & Brazeal, J. 2

1 Laboratoire de Sciences Cognitives et Psycholinguistique, ENS, Paris, France
2 Department of French and Italian, University of Texas at Austin, Austin, USA

Native speakers of French, a language without contrastive stress, have difficulty perceiving stress contrasts. Using a pretest-posttest design with 10 trainees and 10 controls, we examined whether French listeners can improve their stress perception with auditory training. We used naturally spoken, phonetically varied stimuli in a sequence recall task, in which participants have to recall sequences of two auditorily presented non-words that differ either in the position of stress (test condition) or in a phoneme (control condition). At the end of six 30-minute training sessions on stress contrasts, the trainees showed no improvement in their perception of stress: an ANOVA with the factors Session (Pretest/Posttest), Group (Trainees/Controls) and Contrast (Phoneme/Stress) yielded an effect of Contrast only (F(1,18)=53.9, p<.0001), with worse performance on the stress contrast. This result contrasts with previous findings that listeners can be effectively trained to improve their perception of non-native contrasts. We argue that the lack of a training effect is task-specific. In particular, unlike the 2AFC identification tasks used in previous studies, the sequence recall task allows participants neither to use a low-level acoustic response strategy nor to rehearse the stimuli subvocally. We discuss the consequences for theories of phonological learning.
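For readers who want to see how the reported 2 (Session) x 2 (Group) x 2 (Contrast) design maps onto an analysis, here is a minimal sketch using a linear mixed model with by-participant random intercepts as an approximation to the repeated-measures ANOVA; the data are simulated and all variable names are hypothetical, so this is not the authors’ analysis script.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated layout: 20 participants x 2 sessions x 2 contrasts,
# with group (trainees vs. controls) varying between participants.
rows = []
for subj in range(20):
    group = "trainees" if subj < 10 else "controls"
    for session in ("pretest", "posttest"):
        for contrast in ("phoneme", "stress"):
            # Toy accuracies mimicking the reported pattern:
            # worse on the stress contrast, no effect of training.
            accuracy = 0.9 - (0.25 if contrast == "stress" else 0.0) + rng.normal(0, 0.05)
            rows.append(dict(subject=subj, group=group, session=session,
                             contrast=contrast, accuracy=accuracy))
df = pd.DataFrame(rows)

# By-participant random intercepts approximate the repeated-measures structure.
model = smf.mixedlm("accuracy ~ session * group * contrast", df, groups=df["subject"])
print(model.fit().summary())
```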




PS_3.080 - Malleability of the French voicing perception after auditory training in young children

Collet, G. 1, 2, 3 , Leybaert, J. 2 , Serniclaes, W. 4 & Colin, C. 1

1 Unité de Recherche en Neurosciences Cognitives, Université Libre de Bruxelles, Bruxelles, Belgium
2 Laboratoire Cognition, Langage, Développement, Université Libre de Bruxelles, Bruxelles, Belgium
3 Fond National de Recherche Scientifique (FNRS), Bruxelles, Belgium
4 Laboratoire Psychologie de la Perception (CNRS), Université René Descartes, Paris 5, France

The present study aimed at investigating the effects of auditory identification training on the categorical perception of a /də/-/tə/ voicing continuum in healthy French-speaking 6-year-old children. The training consisted of fourteen 30-minute identification sessions (fading procedure) with feedback, designed to emphasize the temporal cue to be trained (Voice Onset Time, V.O.T.). For ten children, training was focused on the French phonological boundary (0 ms V.O.T.), and for ten other children, training was focused on a universal boundary (-30 ms V.O.T.). A further ten control children did not receive any training. Pre- and post-training assessments were performed through identification and discrimination tasks aimed at evaluating categorical perception along the entire V.O.T. continuum. Whereas no significant change was observed in the control group, boundary precision (across the French phonological boundary) increased in the 0 ms V.O.T. training group. Data are currently being collected for the -30 ms V.O.T. training group. These results show that the categorical perception of voicing can be improved in healthy children after fourteen training sessions.
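One common way to quantify boundary precision along a V.O.T. continuum (a generic sketch with made-up numbers, not the authors’ analysis) is to fit a logistic identification function and read off its location (the boundary) and slope (its steepness, i.e. precision):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    """Proportion of /tə/ responses as a function of V.O.T. (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

# Hypothetical group identification data along a /də/-/tə/ continuum
vot_ms = np.array([-60, -45, -30, -15, 0, 15, 30, 45, 60], dtype=float)
prop_t = np.array([0.02, 0.05, 0.10, 0.25, 0.55, 0.85, 0.95, 0.98, 0.99])

(boundary, slope), _ = curve_fit(logistic, vot_ms, prop_t, p0=[0.0, 0.1])
print(f"Estimated boundary: {boundary:.1f} ms V.O.T.; slope: {slope:.3f}")
# A steeper post-training slope around the boundary would indicate
# increased boundary precision.
```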




PS_3.081 - Auditory memory: It is auditory, but it’s not memory

Macken, B. & Jones, D.

School of Psychology, Cardiff University, U.K.

The ability to compare the frequencies of two tones separated by an interval of a few seconds decreases as the length of the interval increases, and is also impaired when other, task-irrelevant tones are presented between the standard and comparison tones. Such performance is typically attributed to auditory memory processes, whereby a volatile representation of the first tone is subject to decay and/or interference as a function of time and/or the presence of similar intervening material. Here we show that such an auditory memory account is wrong: in direct contradiction to it, tone discrimination can actually improve under conditions in which the temporal interval between standard and comparison is increased and in which the quantity of similar intervening material is increased. Rather than explaining this performance in terms of auditory memory, we argue that it reflects processes involved in comparing features within and across auditory objects, with the latter leading to poorer discrimination performance than the former.



