[PS-1.20] Visual Perception and Sign-Supported Speech in Deaf Adolescents

Mastrantuono, E., Saldaña, D. & Rodríguez Ortiz, I.

Universidad de Sevilla

The effectiveness of sign-supported speech (SSS) for deaf adolescents has recently regained interest in the debate on inclusive education.
We investigated discourse comprehension and inference generation (45 participants) when stimuli were delivered in SSS, spoken language, and Spanish Sign Language (LSE). Participants' eye movements were tracked. Participants' attention to the signs of SSS was explored in tasks using basic sentences (36 participants): we manipulated the visual conditions so that participants directed their gaze towards the linguistic channel they were effectively processing. We also developed a task in which sign and speech occasionally mismatched.
The use of SSS did not increase discourse comprehension or inference generation in cochlear implant users. Nevertheless, comprehension scores and eye-tracking data from the basic-sentence tasks revealed that these participants were processing the signs of SSS. We suggest that SSS is less effective at the discourse level because comprehension involves a greater number of variables than vocabulary alone. For native signers, SSS increased comprehension compared with spoken language, especially when comprehension required inference processing. Higher comprehension of LSE than of SSS, together with a more extensive use of peripheral vision to perceive signs in LSE, highlighted differences between processing a native language and an artificial signing system.