Sunday, October 2nd, 2011 [09:30 - 11:10]
SY_24. Auditory learning
How does the auditory system solve complex learning challenges like those posed by natural speech, where a continuous acoustic signal is ultimately parsed into discrete phoneme classes? In this symposium, a number of researchers in auditory cognitive neuroscience address this question. One focus is on the conditions under which auditory learning takes place. It is becoming increasingly clear that in addition to classic supervised categorization training, auditory perception can be shaped by the multimodal regularities of the world without requiring overt categorization of the sound or explicit feedback. Also, even simple repeated exposure to otherwise meaningless noise bursts induces unsupervised learning with striking properties. For example, multiple noises can be remembered for weeks. These findings have been extremely useful in elucidating how exposure, reinforcement, and attentional mechanisms interact to produce learning. We also address whether learning to categorize speech sounds into phonemes is fundamentally different from learning non-speech sounds. For any auditory adaptive system, there are two conflicting requirements: It should be as stable as possible, but it should also be able to adjust to a changing environment. The balance between these two requirements, though, may be different for speech and non-speech, with potential consequences for how sounds are learned. Finally, we will address how perceptual changes in sound processing are accompanied by specific changes in the cortical sound representations, using fMRI and multivariate pattern analysis (MVPA). Ultimately, we hope this symposium will deepen our understanding of human auditory processing as an adaptive, experience-dependent perceptual system.
SY_24.1 - Rapid auditory learning for meaningless sounds
Agus, T. 1, 2 & Pressnitzer, D. 1, 2
1 Laboratoire Psychologie de la Perception, UMR 8158, CNRS & U. Paris Descartes
2 Ecole normale supérieure, 29 rue d’Ulm, 75005 Paris, France
One basic goal of auditory perception is to recognize the plausible physical causes of incoming sensory information. In order to do so, listeners must learn recurring features of complex sounds and associate them with sound sources. However, how memories emerge from everyday auditory experience with arbitrary complex sounds is currently largely unknown. We will describe a novel psychophysical paradigm designed to observe the formation of new auditory memories [Agus, Thorpe, & Pressnitzer, Neuron, 2010]. The behavioral measure was based on the detection of repetitions embedded in 1-s-long noises. Unbeknownst to the listeners, some noise samples re-occurred randomly throughout an experimental block. In line with our hypothesis, repetitions in these re-occurring noises were detected more frequently, showing that repeated exposure could induce learning of otherwise meaningless sounds. The learning displayed several striking features: it was unsupervised and resilient to interference from other task-relevant noises. When memories were formed, they emerged rapidly, performance abruptly became near-perfect, and multiple noises were remembered for several weeks. The acoustic transformations to which recall was tolerant suggest that the learnt features were local in time and generalizable over a range of frequencies. We will also present new results: in subsequent experiments, listeners were presented with learnt noises, but without a within-trial repetition. They often mistakenly reported that these stimuli were repeated, suggesting that they relied to some extent on noise recognition, rather than repetition detection. Listeners were also able to learn sounds in the absence of within-trial repetitions, showing that the auditory learning mechanism could function at larger interstimulus intervals. Based on these results, we hypothesize that the ubiquitous rapid plasticity we observed could be key to the efficient formation of auditory memories.
As the noise-learning paradigm uses totally meaningless sounds, it is well-suited to studying the effect of auditory learning on low-level perception.
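The core stimulus design of such a repetition-detection paradigm can be illustrated with a short sketch. This is a minimal, illustrative reconstruction using NumPy, not the authors' actual stimulus code: the sampling rate, function names, and reference-seed mechanism are all assumptions made for clarity.

```python
import numpy as np

FS = 44100  # sampling rate in Hz (illustrative choice)
rng = np.random.default_rng()

def repeated_noise(seed=None, dur=1.0):
    """1-s stimulus built by seamlessly repeating a 0.5-s noise segment."""
    r = np.random.default_rng(seed)
    half = r.standard_normal(int(FS * dur / 2))
    return np.concatenate([half, half])

def fresh_noise(dur=1.0):
    """1-s noise with no internal repetition."""
    return rng.standard_normal(int(FS * dur))

# Fixing a "reference" seed makes the identical noise re-occur across
# trials, mimicking the unannounced re-occurring samples in the paradigm.
REF_SEED = 1234

def make_trial(repeated=True, reference=False):
    if repeated:
        return repeated_noise(seed=REF_SEED if reference else None)
    return fresh_noise()

trial = make_trial(repeated=True, reference=True)
assert trial.shape[0] == FS                            # 1 s of samples
assert np.allclose(trial[:FS // 2], trial[FS // 2:])   # internal repetition
```

Listeners judge, trial by trial, whether the noise contains a within-trial repetition; the improved detection for reference noises is what reveals the unsupervised memory.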
SY_24.2 - How the challenges of speech perception can inspire investigations of general auditory learning
Carnegie Mellon University
A rich history of research with adults and infants informs us about the ways experience with the native language shapes speech perception. However, in part because it is so difficult to control and manipulate speech experience, we know very little about the learning mechanisms that are responsible. Moreover, there has been somewhat limited attention to how the auditory system solves complex learning challenges like those posed by speech signals. I will describe the results of a series of studies that exploit artificial, nonlinguistic sounds that mimic some of the complexities of speech to gain experimental control over listeners’ histories of experience and, ultimately, to leverage this control to work toward mechanistic explanations of auditory learning. We have exploited classic supervised categorization training techniques commonly employed in visual cognition as well as a more naturalistic videogame training paradigm that models multimodal regularities of the world without requiring overt categorization or providing explicit feedback. Our results demonstrate the feasibility of using general auditory learning to better understand speech processing and indicate the ways in which auditory perception is jointly shaped by the acoustic signal, long-term learned representations, and regularities of the immediate environment. The literature in this area is not yet large, but already there are insights. We argue that progress in understanding speech processing can be made by understanding the boundaries and constraints of auditory cognition in general. Reciprocally, our understanding of human auditory processing is deepened by studying the complex, experience-dependent perceptual challenges presented by speech. Though long relegated to the status of a special system that could tell us little about general human cognition, the study of speech perception as a flexible, experience-dependent perceptual skill has much to offer the development of a mature auditory cognitive neuroscience.
SY_24.3 - Accents, Assimilation, and Auditory Adjustments
Samuel, A. 1, 2, 4 & Kraljic, T. 3
1 Basque Center on Cognition, Brain, and Language
2 Stony Brook University
3 University of Pennsylvania, Nuance Communications
4 Ikerbasque, Basque Foundation for Science
The perceptual system’s two main requirements are potentially conflicting: It should be as stable as possible, but it should also adjust to a changing environment. A growing body of research is clarifying how the system balances these requirements in the perception of speech. Many studies have shown that when a listener receives an ambiguous phonetic input, additional context (e.g., lexical, visual) is used both to resolve the phonetic ambiguity in the moment, and to adjust the associated perceptual representation that informs subsequent perception. For example, if the /s/ in “Tennessee” is produced with a somewhat “sh”-like quality, the lexical context both determines that the segment was /s/, and expands the /s/ category for later inputs. Kraljic, Brennan, & Samuel (2008) tested a possible constraint on this adjustment process by exposing listeners to these ambiguous segments, but only in a particular context: /str/ (as in “abstract”, or “district”). In the participants’ local dialect of American English, /s/ before /tr/ is typically produced as precisely such an ambiguous sound. The theoretical question was whether perceptual adjustment would occur under these circumstances. It did not. Thus, the same ambiguous segments that reliably generate retuning in other contexts do not do so in the /str/ context, implicating an additional factor. Kraljic et al. discussed the possibility that a dialectal “explanation” for phonetic ambiguity might block the adjustment process. However, they noted that the shift in /s/ before /tr/ is also a form of place assimilation. Thus, the blocking of adjustment could be due either to dialect or to assimilation. In the current work, we tease these two cases apart, testing a case of dialect without assimilation, and a case of assimilation without dialect. The results favor blocking based on assimilation, rather than dialect, clarifying the processing levels that are subject to perceptual adjustment.
SY_24.4 - Task-irrelevant auditory learning
Seitz, A. 1 & Protopapas, A. 2
1 University of California, Riverside
2 University of Athens
Numerous studies of visual learning have shown that task-irrelevant stimuli can be learned when they are paired with important behavioral events. These studies of task-irrelevant perceptual learning (TIPL) have helped elucidate how reinforcement and attentional mechanisms interact to produce learning. Here, I will present data from two studies of auditory TIPL. In the first study, we show that detection of formant transitions (changes in spectral energy peaks) can be learned through TIPL. In the second study, we found that non-native speech sound contrasts can also be learned. Interestingly, the magnitude of the learning effects through TIPL is similar to that found through direct, explicit and attended training on the same stimuli. These studies help demonstrate the generality of TIPL to audition and show promise for TIPL as a methodology to aid adult learners of new languages.
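The defining feature of a TIPL design is the contingency between a task-irrelevant stimulus and behaviorally important events in the attended task. The sketch below is a hypothetical illustration of such a trial schedule, not the authors' procedure: stimulus names, the 50/50 target rate, and the one-to-one pairing rule are all assumptions made for clarity.

```python
import random

random.seed(0)

# Hypothetical labels for the two task-irrelevant sounds.
PAIRED_SOUND = "formant_transition_A"    # always paired with targets
UNPAIRED_SOUND = "formant_transition_B"  # always paired with distractors

def make_schedule(n_trials=20):
    """Build a trial list where sound identity is yoked to target onsets."""
    trials = []
    for _ in range(n_trials):
        is_target = random.random() < 0.5
        trials.append({
            "visual_event": "target" if is_target else "distractor",
            "sound": PAIRED_SOUND if is_target else UNPAIRED_SOUND,
        })
    return trials

schedule = make_schedule()
# The contingency: sound identity is fully predicted by event type.
assert all(
    (t["visual_event"] == "target") == (t["sound"] == PAIRED_SOUND)
    for t in schedule
)
```

Because listeners attend only to the visual task, any post-test advantage for the target-paired sound over the distractor-paired sound is attributed to task-irrelevant learning.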
SY_24.5 - Representations of newly-learned sound categories
Ley, A. 1, 2 , Hausfeld, L. 1 , Vroomen, J. 2 , Valente, G. 1 , de Weerd, P. 1 & Formisano, E. 1
1 University of Maastricht
2 Tilburg University
Mapping different sounds onto the same identity requires the extraction of relevant features for enhancing between-category and minimizing within-category differences. We used complex artificial sounds (ripples), fMRI and multivariate pattern analysis (MVPA) to investigate the relation between behavioral and neural changes in the course of category learning. Subjects were scanned twice during passive listening, once before category training and once after successful learning of pitch categories. Pre- and post-training classification accuracies were compared for the relevant (i.e. consistent with the behavioral categorization rule) and irrelevant stimulus labels. Over the course of training, sound identification curves gradually changed into a sigmoid shape, reflecting successful category learning. MVPA revealed that the most discriminative voxels were widely distributed over the auditory cortex and included locations of early auditory areas. Perceptual changes associated with feature-specific category learning are thus accompanied by specific changes in the cortical sound representations.
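The behavioral signature of category learning described here, identification curves sharpening into a sigmoid, can be made concrete with a small sketch. The response proportions below are invented for illustration (they are not data from the study), and the fixed boundary and grid-search fit are simplifying assumptions.

```python
import numpy as np

def sigmoid(x, x0, k):
    """Logistic identification curve: P("category B") along the continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def fit_slope(x, y, x0=0.5):
    """Grid-search the logistic slope k that best fits identification data
    (boundary x0 fixed at the continuum midpoint for simplicity)."""
    ks = np.linspace(0.5, 20.0, 400)
    sse = [np.sum((sigmoid(x, x0, k) - y) ** 2) for k in ks]
    return ks[int(np.argmin(sse))]

# Hypothetical "category B" response proportions along a pitch continuum,
# before and after training (values illustrative only).
pitch = np.linspace(0.0, 1.0, 7)
pre = np.array([0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80])   # shallow
post = np.array([0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.98])  # sigmoid-like

k_pre = fit_slope(pitch, pre)
k_post = fit_slope(pitch, post)
assert k_post > k_pre  # steeper slope = sharper category boundary
```

The steeper post-training slope is the quantitative counterpart of "gradually changed into a sigmoid shape"; the fMRI analysis then asks whether cortical response patterns sharpen along the same category-relevant dimension.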