Does comprehension (sometimes) go wrong for noncanonical sentences?

Meng, M.¹ & Bader, M.²

¹ Merseburg University of Applied Sciences
² University of Frankfurt

There is an ongoing debate about whether the human parsing mechanism (HPM) always derives sentence meaning from representations computed algorithmically, thereby accurately reflecting the input, or whether the HPM sometimes resorts to nonalgorithmic strategies that may result in nonveridical representations. Misinterpretation effects for noncanonical sentences, such as passives (Ferreira, 2003), provide the major evidence in favor of models allowing for nonveridical representations. However, it is unclear whether these effects reflect errors in the mapping of form to meaning or difficulties specific to the procedure used to test comprehension. We report a study combining two different comprehension tasks to tease apart these alternatives. Participants first judged the plausibility of canonical and noncanonical sentences and then (as in Ferreira, 2003) named the agent or patient/theme of the sentence. Both tasks require correctly identifying the agent or patient/theme, but they differ in the complexity of the operations required to complete the task successfully. Crucially, participants made a substantial number of errors when naming the agent or patient/theme even when they had correctly assessed sentence plausibility. We conclude that misinterpretation effects do not necessarily indicate comprehension errors and hence cannot serve as evidence for nonveridical representations. Our results support models of the HPM that assume algorithmic processing only.