Abstract
In this work, we aim to advance knowledge about differences in the neural processing of music between musicians and non-musicians, both through our findings and methodologically. Music is omnipresent in our daily lives. Music listening has been found to have many benefits for health, and music-based interventions have been shown to be an effective form of alternative and complementary therapy that aids the treatment of various disorders. Importantly, it has been suggested that an individual's musical background and musicality play a key role in modulating the successful acquisition of the benefits music has to offer.
Musical training has been shown to engender structural and functional changes in the brain owing to its intense, repetitive sensory-motor demands. In addition to conferring enhanced sensitivity in the processing, representation, and discrimination of sounds and music, musical training has also been associated with transfer effects such as enhanced cognitive function in speech and language processing, motor abilities, attention, and memory. Music is thus an ideal tool for studying brain adaptation and plasticity, and musicians an ideal group for studying experience-driven brain changes, especially when juxtaposed with a group of non-musicians. Thus, along with understanding the neural underpinnings of music feature processing, it is also essential to examine how musical training modulates the functionality of the brain by investigating group-level differences between musicians and non-musicians. Our work is a step in that direction. A distinctive aspect of the work presented here is the use of the naturalistic paradigm, wherein participants listen to entire pieces of music without performing any task while being scanned; this makes it an attractive and more ecologically valid approach to studying music perception than the controlled paradigms used hitherto.
In our work, we look at the differences between musicians and non-musicians from two different viewpoints: a segregated viewpoint (which aims to identify local regions specialized for a particular task) in our first study, and an integrated viewpoint (which allows the identification and characterization of interactions between brain regions that enable integrated functioning) in our second study. Group differences were identified in both studies, thereby revealing differences in neural encoding as a function of musical expertise. Musical training and expertise are likely to enable top-down, analytic processing in the musicians, especially of the more complex and higher-level aspects and features of music, leading to differences in listening and processing strategies caused predominantly by their varied backgrounds. This resulted in less homogeneity at the group level for the musicians. The non-musicians, on the other hand, exhibited greater within-group consistency, indicating primarily sensory, bottom-up processing.
In our first study, which takes a segregated approach, we investigate the differences between musicians and non-musicians in the encoding of acoustic features encompassing musical timbre, rhythm, and tonality in a continuous music listening paradigm. Eighteen musicians and 18 non-musicians were scanned using functional magnetic resonance imaging (fMRI) while listening to three 8-minute-long musical pieces representing different musical styles. Acoustic features corresponding to timbre, rhythm, and tonality were computationally extracted from the stimuli and correlated with the brain responses. Overall, non-musicians exhibited broader regions of correlation, implying greater within-group similarity in musical feature processing, especially in the auditory and default mode network-related regions. Musicians demonstrated significant correlations in regions possessing training-induced adaptations to music processing, in addition to greater involvement of limbic and reward regions in response to rhythm, indicative of greater affective and analytic processing. However, as a group, they did not exhibit large regions of consistent correlation patterns, especially when processing high-level features, which could be attributed to differences in processing strategies arising from their varied training and backgrounds.
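To make the feature-to-brain correlation analysis concrete, the sketch below illustrates one way such an analysis could be implemented. It is a minimal, hypothetical example, not the study's actual pipeline: it assumes librosa for extracting simple proxies of timbre (spectral centroid), rhythm (onset strength), and tonality (a chroma-based key-clarity measure), a canonical double-gamma HRF for convolution, and a single preprocessed voxel time series; the feature set, toolchain, sampling rate, and preprocessing used in the study may differ.

    # Hypothetical sketch: extract timbre/rhythm/tonality proxies from a stimulus
    # and correlate them with a voxel's fMRI time series. Names and parameters
    # are illustrative assumptions, not those of the actual study.
    import numpy as np
    import librosa
    from scipy.stats import pearsonr, gamma

    TR = 2.0  # assumed fMRI repetition time in seconds

    def canonical_hrf(tr, duration=32.0):
        """Double-gamma haemodynamic response function sampled at the TR."""
        t = np.arange(0, duration, tr)
        peak = gamma.pdf(t, 6)          # positive response peaking around 6 s
        undershoot = gamma.pdf(t, 16)   # late undershoot
        hrf = peak - 0.35 * undershoot
        return hrf / hrf.sum()

    def stimulus_features(audio_path, tr):
        """Frame-wise acoustic features, averaged within successive TR windows."""
        y, sr = librosa.load(audio_path, sr=22050, mono=True)
        hop = 512
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]  # timbre proxy
        onset_env = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)         # rhythm proxy
        chroma = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=hop)
        key_clarity = chroma.max(axis=0) - chroma.mean(axis=0)                       # crude tonality proxy
        frames_per_tr = int(round(tr * sr / hop))
        def to_tr(x):
            n = len(x) // frames_per_tr
            return x[: n * frames_per_tr].reshape(n, frames_per_tr).mean(axis=1)
        return {"timbre": to_tr(centroid),
                "rhythm": to_tr(onset_env[: len(centroid)]),
                "tonality": to_tr(key_clarity)}

    def correlate_with_voxel(features, voxel_ts, tr):
        """Convolve each feature with the HRF and correlate it with one voxel's signal."""
        hrf = canonical_hrf(tr)
        results = {}
        for name, f in features.items():
            predicted = np.convolve(f, hrf)[: len(f)]
            n = min(len(predicted), len(voxel_ts))
            r, p = pearsonr(predicted[:n], voxel_ts[:n])
            results[name] = (r, p)
        return results

    # Usage with a hypothetical stimulus file and a simulated voxel signal:
    feats = stimulus_features("stimulus_piece1.wav", TR)
    voxel = np.random.randn(len(feats["timbre"]))  # stand-in for a preprocessed voxel time series
    print(correlate_with_voxel(feats, voxel, TR))

In a whole-brain analysis, the same correlation would be computed for every voxel and the resulting maps thresholded and compared between the musician and non-musician groups.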
For our second study, which takes an integrated approach, we note that studies from a segregated viewpoint and studies involving network analysis have indicated the presence of large-scale brain networks involved in the processing of music and have highlighted differences between musicians and non-musicians. However, network analysis studies of functional connectivity in the brain during music listening have thus far focused solely on static network analysis. Dynamic Functional Connectivity (DFC) studies have lately been found useful in uncovering meaningful, time-varying functional connectivity information in both resting-state and task-based experimental settings. We examin