From lips to lexicon: Does visual speech activate lexical representations?

Fort, M. 1, Kandel, S. 1, 2, 3, Chipot, J. 1, Savariaux, C. 3, Granjon, L. 3 & Spinelli, E. 1, 2, 4

1 Laboratoire de Psychologie et NeuroCognition, University of Grenoble, Grenoble, France
2 Institut Universitaire de France, Paris, France
3 GIPSA-Lab/Dpt. Parole et Cognition, University of Grenoble, Grenoble, France
4 University of California, Berkeley, California, USA

Seeing the speaker’s articulatory gestures enhances phoneme perception, especially in noisy environments. Previous studies provide evidence that visual speech may also contribute to lexical access. To address this issue, we used a fragment priming procedure paired with a lexical decision task, in which the primes were syllables that either did or did not share the initial syllable of an auditory word or pseudo-word target. In Experiment 1, primes were displayed in audiovisual (AV), audio-only (AO) or visual-only (VO) conditions. The analyses on words revealed priming effects not only for the AV and AO primes but also for the VO primes. The latter finding indicates that visual speech facilitates the subsequent processing of an auditory word. In Experiment 2, we compared the priming effect in the VO condition for high- and low-frequency words. The results showed that the effect is stronger for low-frequency words, indicating that the locus of this facilitation is lexical rather than pre-lexical. This provides evidence that visual information mediates word recognition processes mainly when a lexical unit requires a large amount of activation to be recognized (e.g., a low-frequency word). These experiments demonstrate that the visual perception of the initial phonemes provides enough information to activate lexical candidates. Seeing the articulatory gestures of a speaker facilitates the early phases of spoken word recognition.