Have you ever heard a snippet of a song and immediately known what was coming next? Or picked up the rhythm of the chorus after just a few notes? A new study from the Centre for Music in the Brain at Aarhus University and the Centre for Eudaimonia and Human Flourishing at the University of Oxford has found that our brains process music through a specific hierarchical activation of several areas. The research, published in Nature Communications, provides new insights into the neural mechanisms underlying the ability to predict and identify familiar melodies.
While previous studies have established the hierarchical structure of auditory perception, most of the work has focused on basic auditory stimuli and automatic predictive processes. However, little is known about how this information is integrated with complex cognitive functions such as conscious recognition and prediction of sequences over time. By investigating these mechanisms, the researchers aimed to gain new insights into how the brain processes complex auditory tasks.
“My interest in this subject began during my interdisciplinary education. As a child, I was passionate about both science and football, but eventually I dedicated myself to studying classical guitar in depth. Between the ages of 18 and 22, I performed in several concerts and taught guitar. However, I realized that my childhood passion for science was calling me back,” said study author Leonardo Bonetti, an associate professor at Aarhus University and the University of Oxford.
“I initially studied psychology and then moved to neuroscience, where I became particularly interested in analytical methods. During my research, I discovered that music can be a powerful tool to investigate certain functions of the brain that are difficult to understand with non-musical stimuli. This is because music consists of a series of hierarchical sounds sequenced in time, making it a perfect means to examine how the brain consciously processes information over a period of time.”
The study involved 83 participants aged between 19 and 63. All participants had normal hearing, and most were college educated. Participants first listened to a short piece of music, the first four bars of Johann Sebastian Bach’s Prelude No. 2 in C minor, BWV 847. They were asked to listen to the piece twice and memorize it.
Following this memory phase, participants underwent an auditory recognition task and brain activity was recorded using magnetoencephalography (MEG), a non-invasive imaging technique that captures magnetic fields generated by neural activity, providing precise temporal and spatial resolution.
The recognition task consisted of 135 five-note musical sequences, some of which were identical to the original piece, while others were systematically varied. These variations were introduced at different points in the sequence to observe how the brain responded to changes in a familiar pattern.
Bonetti and his colleagues found that when subjects recognized the original memorized sequence, brain activity followed a specific hierarchical pattern, starting from the auditory cortex, the region responsible for processing basic sound information, and progressing to the hippocampus and cingulate gyrus, regions associated with memory and cognitive evaluation.
When a variation was introduced into the sequence, the brain generated prediction errors that started in the auditory cortex and spread to the hippocampus, anterior cingulate cortex, and ventromedial prefrontal cortex. The anterior cingulate cortex and ventromedial prefrontal cortex showed the strongest responses to these variations.
The study also revealed a consistent brain hierarchical structure characterized by feedforward and feedback connections: feedforward connections from the auditory cortex to the hippocampus and cingulate gyrus were observed, along with simultaneous feedback connections in the opposite direction.
This hierarchical structure was consistent for both previously memorized and altered sequences, but the strength and timing of the brain responses changed, indicating that while the overall structure of brain processing remains stable, the dynamics change depending on whether the sequence is known or new.
“Our study shows that the brain processes music (and information over time) by activating several brain regions in a specific hierarchical order,” Bonetti told PsyPost. “Initially, sensory regions such as the auditory cortex process basic sound features. This information is then passed on to a larger network of regions that are thought to analyze sounds more deeply, including relationships between sounds (e.g., pitch). This process helps the brain determine whether a sequence of sounds is familiar or new.”
“This study not only explains how we perceive music, but also provides insight into how the brain processes and recognizes information over time. On a practical level, future research could focus on studying this phenomenon in ageing, both in healthy and pathological cases (e.g. dementia). By using music, advanced neuroscientific tools and analytical methods, we may be able to gain further understanding of dementia and memory disorders.”
Bonetti said the long-term goals of this research are to develop dementia screening tools based on the brain’s response to music and to integrate MEG and intracranial recordings to enhance data collection methods and gain a more comprehensive understanding of the mechanisms of musical memory.
“By studying aging and dementia longitudinally, we aim to develop screening tools based on brain responses during music recognition,” he explained. “These tools may be able to predict the risk of developing dementia in older adults.”
“Second, we want to expand our data collection methods. Currently, we use magnetoencephalography (MEG), which is a great non-invasive tool, but it lacks the ability to focus deep into the brain. In the future, we plan to integrate MEG with intracranial recordings from electrodes implanted in epilepsy patients. This combination will help us understand the brain mechanisms involved in musical memory across a wider range of time and spatial scales.”
“We are extremely grateful to several foundations that have supported our research, in particular the Lundbeck Foundation, the Carlsberg Foundation, the Danish National Research Foundation, and Linacre College, University of Oxford,” Bonetti added.
The study, “Spatiotemporal brain hierarchy of auditory memory recognition and predictive coding,” was authored by L. Bonetti, G. Fernández-Rubio, F. Carlomagno, M. Dietz, D. Pantazis, P. Vuust, and M. L. Kringelbach.