[PS-2.7] Semantic processing of American Sign Language sentences: effects of ambiguity and word order

Lieberman, A. & Wienholz, A.

Boston University

When processing spoken language sentences, listeners continuously make and revise predictions about the upcoming linguistic signal. In contrast, during comprehension of American Sign Language (ASL), signers must attend simultaneously to the unfolding linguistic signal and the surrounding scene, both delivered via the visual modality. Little is known about how signers integrate these overlapping signals incrementally in real time. In the current study, we measured how signers resolve referential ambiguity during real-time comprehension of ASL sentences.
Participants were native-signing deaf adults and children (4–8 years). Signers were presented with sentences such as 'LOOK-FOR BLUE WHAT? BALL.' The degree of referential ambiguity in the visual scene was manipulated at both the adjective and the noun level. Adult participants shifted gaze to the target earlier in sentences with no early ambiguity. Child participants, while slower overall than adults to initiate gaze shifts, were likewise significantly faster to identify the target in sentences with no early ambiguity. These findings demonstrate that semantic processing in ASL, like spoken language processing, is driven by predictive relationships between the unfolding linguistic signal and the surrounding visual scene. These skills appear to be partially developed in young children, provided they are exposed to sign language from an early age.