Speech or song? Determining how the brain perceives music

summary: New research explores the different ways the brain distinguishes between music and speech.

source: Cognitive Neuroscience Society

Most neuroscientists who study music have one thing in common: they play an instrument, in many cases from a young age. Their drive to understand how the brain perceives and is shaped by music stems from a deep love of music.

That passion has translated into a wealth of discoveries about music in the brain, including recent work outlining the ways the brain distinguishes between music and speech, to be presented today at the Cognitive Neuroscience Society (CNS) annual meeting in San Francisco.

“Over the past two decades, many excellent studies have demonstrated that speech and music share processing mechanisms at many levels,” says Andrew Chang of New York University, a lifelong violinist, who organized a symposium on music and speech perception at the CNS meeting.

“However, a fundamental question, which is often overlooked, is what causes the brain to perceive music and speech signals differently, and why humans need two different auditory signals.”

The new work, enabled in part by computational advances, points to differences in pitch and rhythm as key factors that allow people, beginning in childhood, to distinguish speech from music, and to the role the brain’s predictive capabilities play in both speech and music perception.

Exploring auditory perception in infants

From a young age, cognitive neuroscientist Christina Vanden Bosch der Nederlanden of the University of Toronto Mississauga has been singing and playing the cello, which helped shape her research career.

“I remember sitting in the middle of the cello section and we were playing some particularly beautiful music — a piece where the entire cello section carried the melody,” she says, “and I remember getting this emotional response and wondering, ‘How could I possibly be getting such a powerful response just from the vibrations of my strings reaching my ears? That’s wild!’”

That experience started der Nederlanden on a long journey of wanting to understand how the brain processes music and speech in early development. Specifically, she and her colleagues are investigating whether children, who learn communicative sounds through experience, even know the difference between speech and song.

“These questions seem simple but have great theoretical significance for how we learn to communicate,” she says.

“We know that from the age of four, children can easily and clearly distinguish between music and language. Although this seems very obvious, there has been very little data asking children to make these kinds of distinctions.”

At the CNS meeting, der Nederlanden will present new data, collected before and during the COVID-19 pandemic, on the acoustic features that shape music and language perception during development. In one experiment, 4-month-old infants heard lyrics that were either sung or spoken in an infant-directed manner while their electrical brain activity was recorded with electroencephalography (EEG).

“This new work shows that infants are better at neurally tracking lyrics when they are spoken than when they are sung, and this is different from what we see in adults, who show better neural tracking of sung than of spoken lyrics,” she says.

They also found that pitch and rhythm each affected brain activity differently for speech compared to song; for example, exaggerated pitch contours were associated with better neural tracking of infant-directed speech, identifying pitch instability as an acoustic feature important for capturing infants’ attention.
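The article does not detail der Nederlanden’s analysis pipeline, but neural tracking in studies like these is typically quantified by relating the EEG signal to the amplitude envelope of the stimulus. Below is a minimal sketch of one common measure, a lagged correlation between an EEG channel and the stimulus envelope; the sampling rates, lag range, and variable names are illustrative assumptions, not details of the study.

```python
import numpy as np
from scipy.signal import hilbert, resample

def envelope(audio, audio_sr, eeg_sr):
    """Broadband amplitude envelope of a stimulus, resampled to the EEG rate."""
    env = np.abs(hilbert(audio))                 # amplitude of the analytic signal
    n_out = int(len(audio) * eeg_sr / audio_sr)  # number of samples at the EEG rate
    return resample(env, n_out)

def tracking_score(eeg_channel, stim_env, max_lag=50):
    """Peak correlation between one EEG channel and the stimulus envelope
    across positive lags (EEG lagging the stimulus). Higher = stronger tracking."""
    eeg = (eeg_channel - eeg_channel.mean()) / eeg_channel.std()
    env = (stim_env - stim_env.mean()) / stim_env.std()
    scores = []
    for lag in range(max_lag):
        n = min(len(eeg) - lag, len(env))
        scores.append(np.corrcoef(eeg[lag:lag + n], env[:n])[0, 1])
    return max(scores)

# Hypothetical comparison: is spoken or sung material tracked more closely?
# spoken = tracking_score(eeg_spoken, envelope(audio_spoken, 44100, 250))
# sung = tracking_score(eeg_sung, envelope(audio_sung, 44100, 250))
```

In practice, researchers often use regularized encoding models (temporal response functions) rather than raw correlations, but the comparison is the same: a higher score for spoken than for sung material would mirror the infant result described above.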

While the exaggerated and unstable pitch contours of infant-directed speech are well established as a feature that infants love, this new research shows that they also help indicate whether someone is hearing speech or song.

Pitch stability is a feature that “might signal to the listener, ‘Oh, that sounds like someone’s singing,’” says der Nederlanden, while pitch instability can signal to infants that they are hearing speech rather than someone playing with the sounds of song.

In an online experiment, der Nederlanden and her colleagues asked children and adults to describe how music and language differ.

“This has given me a rich data set that tells me a lot about how people think music and language differ acoustically, and also about how the functional roles of music and language differ in our everyday lives,” she explains.

“For acoustic differences, children and adults described features such as tempo and pitch as important for distinguishing between speech and song.”

In future work, der Nederlanden hopes to move toward more natural settings, including using portable EEG to test music and language processing outside the lab.

“I think the girl sitting in the cello section, wondering about music and emotion, would be very excited to know that she is still asking questions about music and finding answers to the questions she had over 20 years ago!”

Defining the predictive code for music

Guilhem Marion of the Ecole Normale Supérieure has two passions that drive his research: music and computer science. He combined these interests to create new computational models of music that help researchers understand how the brain perceives music through “predictive coding,” similar to how people predict patterns in language.

“Predictive coding theory explains how the brain tries to anticipate the next note while listening to music, which is exactly what computational models of music do to generate new music,” he explains. Marion uses these models to better understand how culture affects music perception, by incorporating the knowledge a listener acquires from their individual musical environment.


In new work done with Giovanni Di Liberto and colleagues, Marion recorded the EEG activity of 21 professional musicians who either listened to or imagined in their minds four Bach chorales.

In one study, they quantified the amount of surprise for each note using a computational model trained on a large database of Western music. This surprise acts as a “cultural marker for music processing,” says Marion, indicating how expected or unexpected each note is given a person’s native musical environment.

“Our study demonstrated for the first time the average EEG response to imagined musical notes and showed that it correlates with musical surprise computed using a statistical model of music,” says Marion.
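Marion’s model is trained on a large database of Western music and is more sophisticated than this, but the core idea of note-level surprise can be sketched with a simple bigram (first-order Markov) model over pitches; the function names, corpus, and EEG variables below are illustrative assumptions rather than the study’s actual code.

```python
import numpy as np
from collections import defaultdict

def train_bigram(corpus):
    """Count pitch-to-pitch transitions in a corpus of melodies,
    where each melody is a list of MIDI pitch numbers."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in corpus:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def note_surprisal(melody, counts, alpha=1.0, n_pitches=128):
    """Surprisal -log2 P(note | previous note), with add-alpha smoothing
    so unseen transitions still receive a finite (large) surprise value."""
    values = []
    for prev, nxt in zip(melody, melody[1:]):
        total = sum(counts[prev].values()) + alpha * n_pitches
        p = (counts[prev][nxt] + alpha) / total
        values.append(-np.log2(p))
    return np.array(values)

# Hypothetical usage: relate per-note surprise to per-note EEG response amplitude.
# surprise = note_surprisal(chorale_melody, train_bigram(western_corpus))
# r = np.corrcoef(surprise, eeg_note_amplitudes)[0, 1]
```

A listener steeped in a different musical tradition would be modeled with a different corpus, which is how such models capture the “cultural marker” Marion describes: the same note sequence yields different surprise values under different statistical expectations.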

“This work has broad implications in musical perception but more generally in cognitive neuroscience, as it will inform the way the human brain learns a new language or other structures that will later shape its perception of the world.”

Such computation-based work enables a new type of music perception study that balances experimental control with ecological validity, Chang says, a balance that the complexity of music and speech sounds has made difficult to achieve.


“If everything is tightly controlled for your experimental purpose, the sounds often become unnatural; but if the natural properties of speech or music are preserved, it becomes difficult to compare the sounds between experimental conditions,” he explains.

Marion and Di Liberto’s pioneering approach enables researchers to investigate, and even isolate, neural activity while a person listens to continuous, natural speech or music recordings.

Chang, who has been playing the violin since he was eight, is excited to see the progress that has been made in studies of musical perception just in the past decade. “When I started my PhD in 2013, only a few labs in the world were focused on music,” he says.

“But there are now many excellent junior researchers and even well-established senior researchers from other fields, such as speech, around the world who are beginning to participate in or even devote themselves to cognitive neuroscience research in music.”

Understanding the relationship between music and language “can help us explore fundamental questions of human cognition, such as why humans need music and speech, and how humans communicate and interact with each other via these forms,” Chang says.

“These findings also lay the groundwork for potential applications in clinical settings and in child development, such as whether music can be used as an alternative form of verbal communication for people with aphasia, and how music facilitates infants’ learning to speak.”

About this music and neuroscience research news

author: Lisa M.P. Munoz
source: Cognitive Neuroscience Society
contact: Lisa M.P. Munoz – Cognitive Neuroscience Society
image: The image is in the public domain

original research: The findings will be presented at the 29th annual meeting of the Cognitive Neuroscience Society
