Babies hear significantly more speech than music at home

Speech and music are the dominant elements of a child’s auditory environment. Although previous research has shown that speech plays a crucial role in children’s language development, less is known about the music babies hear.

A new study from the University of Washington, published May 21 in Developmental Science, is the first to compare the amount of music and speech children hear in infancy. The results showed that babies hear more spoken language than music, with the gap widening as they get older.

“We wanted to get a snapshot of what’s happening in babies’ home environments,” said corresponding author Christina Zhao, research assistant professor of speech and hearing sciences at the UW. “There are a lot of studies that have looked at how many words babies hear at home, and they have shown that the amount of child-directed speech is important for language development. We realized that we know nothing about what kind of music babies are hearing and how it relates to speech.”

Researchers analyzed a data set of full-day audio recordings collected in the home environments of English-learning infants aged 6, 10, 14, 18 and 24 months. At each age, infants heard more music from electronic devices than from in-person sources; the pattern was reversed for speech. And while the percentage of speech intended for babies increased significantly over time, the percentage of music intended for babies did not change.

“We were shocked at how little music there was in these recordings,” said Zhao, who is also director of the Lab for Early Auditory Perception (LEAP), housed at the Institute for Learning & Brain Sciences (I-LABS). “The majority of the music was not intended for babies. We imagine these are songs streaming in the background or on the radio in the car. A lot of it is just ambient.”

This differs from the highly engaging, multi-sensory, movement-based music intervention that Zhao and her team had previously implemented in laboratory settings. During those sessions, music was played while babies were given instruments, and researchers taught caregivers how to synchronize their babies’ movements to the music. A control group of babies came to the laboratory simply to play.

“We did that twice,” Zhao said. “Both times we saw the same result: music intervention improved babies’ neural responses to speech sounds. That got us thinking about what would happen in the real world. This study is the first step toward that larger question.”

Previous studies have relied largely on qualitative and quantitative parental reports to examine the musical input in infants’ environments, but parents tend to overestimate how much they talk or sing to their children.

This study addresses that gap by analyzing full days’ worth of audio recordings made with Language Environment Analysis (LENA) recording equipment. The recordings, originally made for a separate study, documented babies’ natural sound environments for up to 16 hours a day over two days at each recording age.

Researchers then crowdsourced the annotation of the LENA recordings through Zooniverse, a citizen science platform. Volunteers were asked to determine whether each clip contained speech or music. When speech or music was identified, they were asked whether it came from an in-person or an electronic source. Finally, they judged whether the speech or music was intended for a baby.

Because this study drew on a limited sample, researchers are now interested in expanding their data set to determine whether the results generalize to other cultures and populations. A follow-up study will examine the same type of LENA recordings from infants in Latinx families. And because audio recordings provide no context, researchers are also interested in when musical moments occur in babies’ lives.

“We are curious to see whether music input is correlated with any later developmental milestones for these babies,” Zhao said. “We know that speech input is highly correlated with later language skills. From our data, we see that speech and music input are not correlated – so it’s not the case that a family that tends to talk more will also have more music. We’re trying to see whether music contributes independently to certain aspects of development.”

Other co-authors were Lindsay Hippe, a former UW honors thesis student and prospective master’s student in clinical speech-language pathology; Victoria Hennessy, LEAP research assistant and lab manager; and Naja Ferjan Ramírez, assistant professor of linguistics and adjunct research professor at I-LABS. The study was funded by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health.