PS_3.078 - Eye tracking during French Cued Speech perception: preliminary results

Bayard, C.¹,², Tilmant, A.¹,², Leybaert, J.¹ & Colin, C.²

¹ Laboratoire Cognition Langage Développement, ULB, Brussels, Belgium
² Unité de recherche en Neurosciences Cognitives, ULB, Brussels, Belgium

French Cued Speech (CS) was developed to help deaf people understand speech. Since this system is multi-signal, combining lip movements and cues (hand movements), we conducted an eye tracking study to examine whether CS perception involves integrative processing of both signals and how expertise affects it. Our paradigm consisted of three conditions without sound: (1) a multi-signal condition, consisting of a video of a speaker who simultaneously spoke and cued words/pseudowords; (2) a meaningless multi-signal condition, consisting of a video of a speaker producing words/pseudowords with meaningless hand movements; and (3) a lipreading condition, consisting of a video of a speaker uttering words/pseudowords without hand movements. Participants were presented with three response options (the correct answer, a labial distractor, and a gestural distractor) and instructed to select the correct one. Distractors were words/pseudowords sharing the same lip pattern or the same cue as the uttered item. Behavioral and eye tracking data (regions of interest: lips or hand) were collected from two groups of hearing participants: beginner CS-experts and participants completely naïve to CS. The first results are very promising and suggest that only beginner CS-experts integrate cue and labial information. We are currently testing experienced hearing CS-experts and deaf CS-experts; these new data will be reported at the conference.