Piazza, E.¹﻿·², Iordan, M. C.¹·² & Lew-Williams, C.²
¹ Princeton Neuroscience Institute, Princeton University
² Department of Psychology, Princeton University
Infant-directed speech (IDS) is known to differ from adult-directed speech (ADS) along several key dimensions (e.g., pitch, rhythm). To begin breaking into language, infants must discern subtle statistical differences among people and voices in order to direct their attention toward the most relevant signals. Here, we uncover a new defining feature of IDS: mothers significantly alter statistical properties of their vocal timbre when speaking to their infants. Timbre, or tone color, is a statistical fingerprint of voices that helps us instantly identify people. Each person has a unique timbre, but we found that mothers robustly shift their timbre when engaging with infants. We recorded 24 mothers' naturalistic speech while they interacted with their infant and with an adult experimenter in their native language. Half of the participants were English speakers, and half were not. Using a support vector machine classifier, we found that mothers consistently shifted their timbre between IDS and ADS. Importantly, this shift was highly similar across languages. These findings have theoretical implications for understanding how statistical learning supports infants' initial detection of structure, as well as how infants become attuned to their local communicative environments. Moreover, our classification algorithm has direct translational implications for speech recognition technology.
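The abstract does not specify the timbre features or the SVM implementation used, so the following is only a minimal illustrative sketch of the classification step: a linear SVM trained by subgradient descent on the hinge loss, applied to hypothetical per-utterance feature summaries (the feature names and values below are invented for illustration, not taken from the study).

```python
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Train a linear SVM by subgradient descent on the regularized hinge loss.

    X: list of feature vectors; y: labels in {-1, +1}.
    Returns the weight vector w and bias b of the separating hyperplane.
    """
    dim = len(X[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:
                # Point violates the margin: move w toward correct classification.
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:
                # Correctly classified with margin: only regularization shrinks w.
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b


def predict(w, b, x):
    """Sign of the decision function: +1 (IDS) or -1 (ADS)."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1


# Hypothetical toy data: two summary features per utterance
# (e.g., a spectral-shape statistic and a pitch statistic), standardized.
ids_utterances = [[2.0, 2.2], [2.5, 1.9], [2.1, 2.4]]      # label +1 (IDS)
ads_utterances = [[-1.8, -2.0], [-2.2, -1.7], [-2.0, -2.3]]  # label -1 (ADS)
X = ids_utterances + ads_utterances
y = [1, 1, 1, -1, -1, -1]

w, b = train_linear_svm(X, y)
preds = [predict(w, b, x) for x in X]
```

On this cleanly separable toy set the learned hyperplane recovers all training labels; in the actual study, of course, cross-validated accuracy on held-out speakers would be the relevant measure.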