OS_39. Implicit learning
Sunday, October 02nd, 2011 [17:00 - 18:00]
OS_39.1 - Grammatical judgments and simultaneous reports: No evidence of implicit artificial grammar knowledge
Marescaux, P. 1, 2 & Roujon, D. 1, 2
1 LAPSCO - CNRS UMR 6024
2 Université Blaise Pascal - Clermont Ferrand - France
Implicit learning was initially described as the acquisition of abstract knowledge about rule-governed environments, taking place largely in the absence of explicit knowledge of what was acquired. Although it was later acknowledged that this characterization was an oversimplification, few studies have tried to analyze verbal reports and their relation to actual performance. Two artificial grammar learning experiments addressed this issue. In Experiment 1, participants who had been exposed to strings generated by an artificial grammar took a grammaticality judgment test in which they were immediately asked to explain each decision they made. Verbal reports contained mainly justifications in terms of admissible/inadmissible bigrams or trigrams, sometimes with additional information about their position. A computed verbal performance measure correlated positively with performance on the grammaticality test. In Experiment 2, the verbal reports of each originally trained participant were summarized on a sheet delivered to a yoked participant, who then completed the grammaticality test without prior exposure to the grammar. The performances of original and yoked participants on the grammaticality test were indistinguishable. Overall, the findings suggest that when full opportunities are given for explicit knowledge to emerge, implicitly acquired knowledge may be wholly elicited.
OS_39.2 - Subliminal exposure and indirect test: Evidence of passive processing in artificial grammar learning
Roujon, D. 1, 2 & Marescaux, P. 1, 2
1 LAPSCO - CNRS UMR 6024
2 Université Blaise Pascal - Clermont-Ferrand - France
Is implicit learning a non-intentional cognitive process? Much of the empirical evidence supporting this claim comes from standard artificial grammar learning experiments. However, the paradigms used in these studies offer some opportunities for explicit processing to occur. At encoding, time is allowed to examine the material to be memorized. At retrieval/use of the stored information, the usual grammaticality test requires informing participants about the rule-based nature of the material. In two experiments, these potential pitfalls were either eliminated or retained by presenting grammatical strings subliminally (29 ms), sub-optimally (100 ms), or optimally (5000 ms) at study time and by subsequently giving either a liking or a grammaticality test. In Experiment 1, new strings were rated individually for grammaticality or for liking. Grammatical items were rated higher than ungrammatical ones only in the grammaticality task, regardless of prior exposure duration. In Experiment 2, the new strings (grammatical and ungrammatical) were presented in a forced-choice test. Grammatical items were preferred to ungrammatical ones in all conditions. However, more grammatical items were chosen in the grammaticality test than in the liking test. Overall, the findings support the view that implicit learning can be purely non-intentional. Nevertheless, our cognitive system tends to exploit every opportunity for additional explicit processing.
OS_39.3 - A statistical account of the starting small effect on learning a complex hierarchical grammar in AGL
In an artificial grammar learning (AGL) study, Lai & Poletiek (2011) found that human participants could learn a centre-embedded recursive grammar only if the input during training was presented in a staged fashion. Previous AGL studies with randomly ordered input failed to demonstrate learning of such a centre-embedded structure. In the account proposed here, the staged input effect is explained by a fine-tuned match between the statistical characteristics of the incrementally organized input and the development of human cognitive learning over time, from low-level, linearly associative processing to hierarchical processing of long-distance dependencies. Interestingly, the model suggests that staged input is effective only for learning hierarchical structures and is unhelpful for learning linear grammars.