Toward a neural basis of music perception - a review and updated model

Stefan Koelsch

Abstract

Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal system, and the immune system. Building on a previous article (Koelsch and Siebel, 2005), this review presents an updated model of music perception and its neural correlates. The article describes the processes involved in music perception and reports EEG and fMRI studies that inform about the time course of these processes, as well as about where in the brain these processes might be located.

Keywords: EEG; ERAN; brain; fMRI; music; semantics.

Figures

Figure 1
Neurocognitive model of music perception. ABR, auditory brainstem response; BA, Brodmann area; ERAN, early right anterior negativity; FFR, frequency-following response; LPC, late positive component; MLC, mid-latency component; MMN, mismatch negativity; RATN, right anterior-temporal negativity; RCZ, rostral cingulate zone; SMA, supplementary motor area. Italic font indicates peak latencies of scalp-recorded evoked potentials.
Figure 2
(A) Examples of chord functions: the chord built on the first scale tone is denoted as the tonic, the chord on the second tone as the supertonic, and the chord on the fifth tone as the dominant. (B) The dominant–tonic progression represents a regular ending of a harmonic sequence (top); the dominant–supertonic progression is less regular and unacceptable as a marker of the end of a harmonic progression (bottom sequence; the arrow indicates the less regular chord). (C) ERPs elicited in a passive listening condition by the final chords of the two sequence types shown in (B). Both sequence types were presented equiprobably, in pseudorandom order, in all 12 major keys. Brain responses to irregular chords clearly differ from those to regular chords (best seen in the black difference wave: regular subtracted from irregular chords). The first difference between the two waveforms is maximal around 200 ms after the onset of the fifth chord (ERAN, indicated by the long arrow) and is taken to reflect processes of music-syntactic analysis. The ERAN is followed by an N5, taken to reflect processes of harmonic integration (short arrow). (D) Activation foci (small spheres) reported by functional imaging studies on music-syntactic processing using chord-sequence paradigms (Koelsch et al., 2002, 2005a; Maess et al., 2001; Tillmann et al., 2003) and melodies (Janata et al., 2002a). Large yellow spheres show the mean coordinates of the foci (averaged for each hemisphere across studies; coordinates refer to standard stereotaxic space). Reprinted from Koelsch and Siebel (2005).
Figure 3
Tree structures (according to the GSM) for the sequences shown in Figure 2B, ending on a regular tonic (left) and on a supertonic (right). Dashed line: expected structure (≠: the tonic chord is expected, but a supertonic is presented); dotted lines: a possible solution for the integration of the supertonic. TR(→DR) indicates that the supertonic can still be integrated, e.g., if the expected tonic region is re-structured into a dominant region. TR, tonic region; DR, dominant region; SR, subdominant region. Lower-case letters indicate chord functions (functional–structural level), Roman numerals indicate the scale-degree structure, and the bottom row indicates the surface structure in terms of the naming of the chords.
Figure 4
Examples of experimental stimuli used in the studies by Koelsch et al. (2005b) and Steinbeis and Koelsch (2008b). Top: examples of two chord sequences in C major, ending on a regular (upper row) and an irregular chord (lower row, the irregular chord is indicated by the arrow). Bottom: examples of the three different sentence types. Onsets of chords (presented auditorily) and words (presented visually) were synchronous. Reprinted from Steinbeis and Koelsch (2008b).
Figure 5
Grand-average ERPs elicited by the stimuli shown in Figure 4. Participants monitored whether the sentences were (syntactically and semantically) correct or incorrect; in addition, they had to attend to the timbre of the chord sequences and to detect infrequently occurring timbre deviants. ERPs were recorded on the final chords/words and are shown for the different word conditions (note that only difference waves are shown). (A) The solid (blue) difference wave shows the ERAN (indicated by the arrow) and the N5 elicited on syntactically and semantically correct words. The dashed (green) difference wave shows the ERAN and N5 elicited when chords are presented on morpho-syntactically incorrect (but semantically correct) words. Under the latter condition, the ERAN (but not the N5) is reduced. (B) The solid (blue) difference wave is identical to the solid difference wave of (A), showing the ERAN and the N5 (indicated by the arrow) elicited on syntactically and semantically correct words. The dotted (red) difference wave shows the ERAN and N5 elicited when chords are presented on semantically incorrect (but morpho-syntactically correct) words. Under the latter condition, the N5 (but not the ERAN) is reduced. (C) Direct comparison of the difference waves in which words were syntactically incorrect (dashed, green line) or semantically incorrect (dotted, red line). These ERPs show that the ERAN is influenced by the morpho-syntactic processing of words, but not by the semantic processing of words. By contrast, the N5 is influenced by the semantic processing of words, but not by the morpho-syntactic processing of words. Data from Steinbeis and Koelsch (2008b).
Figure 6
Left: Examples of the four experimental conditions preceding a visually presented target word. Top panel: prime sentences semantically related (A) and unrelated (B) to the target word wideness. The diagram on the right shows grand-averaged ERPs elicited by target words after the presentation of semantically related (solid line) and unrelated (dotted line) prime sentences, recorded from a central electrode. Unprimed target words elicited a clear N400 component in the ERP (compared to the primed target words). Bottom panel: musical excerpts semantically related (C) and unrelated (D) to the same target word. The diagram on the right shows grand-averaged ERPs elicited by target words after the presentation of semantically related (solid line) and unrelated (dotted line) musical excerpts. As after the presentation of sentences, unprimed target words elicited a clear N400 component (compared to primed target words). Each trial was presented once; conditions were distributed in random order but counterbalanced across the experiment. Note that the same target word was used for the four different conditions. Thus, condition-dependent ERP effects elicited by the target words can only be due to the different preceding contexts. Reprinted from Koelsch et al. (2004).
Figure 7
Data from the experiments by Daltrozzo and Schön (2009a). The left panel shows ERPs elicited by target words (primed by short musical excerpts); the right panel shows ERPs elicited by musical excerpts (primed by target words). The thick line represents ERPs elicited by unrelated stimuli, the thin line ERPs elicited by related stimuli. Note that the difference in the N1 and P2 components is due to the fact that words were presented visually and musical excerpts auditorily. Both meaningfully unrelated words and meaningfully unrelated musical excerpts elicited N400 potentials.

References

    1. Alain C., Woods D. L., Knight R. T. (1998). A distributed cortical network for auditory sensory memory in humans. Brain Res. 812, 23–3710.1016/S0006-8993(98)00851-8
    1. Alho K., Tervaniemi M., Huotilainen M., Lavikainen J., Tiitinen H., Ilmoniemi R. J., Knuutila J., Näätänen R. (1996). Processing of complex sounds in the human auditory cortex as revealed by magnetic brain responses. Psychophysiology 33, 369–37510.1111/j.1469-8986.1996.tb01061.x
    1. Alperson P. (1994). What is Music? An Introduction to the Philosophy of Music. University Park, PA: Pennsylvania State University Press
    1. Amunts K., Lenzen M., Friederici A., Schleicher A., Morosan P., Palomero-Gallagher N., Zilles K. (2010). Broca's region: novel organizational principles and multiple receptor mapping. PLoS Biol. 8, e1000489.10.1371/journal.pbio.1000489
    1. Bangert M., Altenmüller E. (2003). Mapping perception to action in piano practice: a longitudinal dc-eeg study. BMC Neurosci. 4, 26.10.1186/1471-2202-4-26
    1. Baumeister R., Leary M. (1995). The need to belong: desire for interpersonal attachments as a fundamental human motivation. Psychol. Bull. 117, 497–529
    1. Bendor D., Wang X. (2005). The neuronal representation of pitch in primate auditory cortex. Nature 436, 1161–1165
    1. Berti S., Schröger E. (2003). Working memory controls involuntary attention switching: evidence from an auditory distraction paradigm. Eur. J. Neurosci. 17, 1119–1122
    1. Besson M., Faita F. (1995). An event-related potential (ERP) study of musical expectancy: comparison of musicians with nonmusicians. J. Exp. Psychol. Hum. Percept. Perform. 21, 1278–1296
    1. Besson M., Faita F., Peretz I., Bonnel A. M., Requin J. (1998). Singing in the brain: independence of lyrics and tunes. Psychol. Sci. 9, 494–498
    1. Besson M., Macar F. (1986). Visual and auditory event-related potentials elicited by linguistic and non-linguistic incongruities. Neurosci. Lett. 63, 109–114
    1. Besson M., Schön D. (2001). “Comparison between language and music,” in The Biological Foundations of Music, Vol. 930, eds Zatorre R. J., Peretz I. (New York: The New York Academy of Sciences; ), 232–258
    1. Bigand E., Parncutt R., Lerdahl J. (1996). Perception of musical tension in short chord sequences: the influence of harmonic function, sensory dissonance, horizontal motion, and musical training. Percept. Psychophys. 58, 125–14110.3758/BF03205482
    1. Block N. (2005). Two neural correlates of consciousness. Trends Cogn. Sci. (Regul. Ed.) 9, 46–52
    1. Blood A., Zatorre R. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc. Natl. Acad. Sci. U.S.A. 98, 11818.
    1. Brattico E., Tervaniemi M., Naatanen R., Peretz I. (2006). Musical scale properties are automatically processed in the human auditory cortex. Brain Res. 1117, 162–17410.1016/j.brainres.2006.08.023
    1. Bregman A. (1994). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: The MIT Press
    1. Budd M. (1996). Values of Art. London: Penguin Books
    1. Callan D., Tsytsarev V., Hanakawa T., Callan A., Katsuhara M., Fukuyama H., Turner R. (2006). Song and speech: brain regions involved with perception and covert production. Neuroimage 31, 1327–134210.1016/j.neuroimage.2006.01.036
    1. Cardoso S., Coimbra N., Brandão M. (1994). Defensive reactions evoked by activation of NMDA receptors in distinct sites of the inferior colliculus. Behav. Brain Res. 63, 17–24
    1. Carlyon R. (2004). How the brain separates sounds. Trends Cogn. Sci. (Regul. Ed.) 8, 465–471
    1. Conard N., Malina M., Münzel S. (2009). New flutes document the earliest musical tradition in southwestern Germany. Nature 460, 737–740
    1. Cook N. (1987). The perception of large-scale tonal closure. Music Percept. 197–205
    1. Cross I. (2008). The evolutionary nature of musical meaning. Music. Sci. 179–200
    1. Cross I., Morley I. (2008). “The evolution of music: theories, definitions and the nature of the evidence,” in Communicative Musicality: Exploring the Basis of Human Companionship, eds Malloch S., Trevarthen C. (Oxford: Oxford University Press; ), 61–82
    1. Daltrozzo J., Schön D. (2009a). Conceptual processing in music as revealed by N400 effects on words and musical targets. J. Cogn. Neurosci. 21, 1882–189210.1162/jocn.2009.21113
    1. Daltrozzo J., Schön D. (2009b). Is conceptual processing in music automatic? An electrophysiological approach. Brain Res. 1270, 88–9410.1016/j.brainres.2009.03.019
    1. Darwin C. (1997). Auditory grouping. Trends Cogn. Sci. (Regul. Ed.) 1, 327–333
    1. Darwin C. (2008). Listening to speech in the presence of other sounds. Philos. Trans. R. Soc. Lond. B Biol. Sci. 363, 1011.
    1. Davies S. (1994). Musical Meaning and Expression. Ithaca, NY: Cornell University Press
    1. Deouell L. (2007). The frontal generator of the mismatch negativity revisited. J. Psychophysiol. 21, 188
    1. Deutsch D., Henthorn T., Lapidis R. (2011). Illusory transformation from speech to song. J. Acoust. Soc. Am. 129, 2245–2252
    1. Di Pietro M., Laganaro M., Leemann B., Schnider A. (2004). Receptive amusia: temporal auditory processing deficit in a professional musician following a left temporo-parietal lesion. Neuropsychologia 42, 868–87710.1016/j.neuropsychologia.2003.12.004
    1. Doeller C., Opitz B., Mecklinger A., Krick C., Reith W., Schröger E. (2003). Prefrontal cortex involvement in preattentive auditory deviance detection: neuroimaging and electrophysiological evidence. Neuroimage 20, 1270–128210.1016/S1053-8119(03)00389-6
    1. Donchin E., Coles M. G. H. (1998). Context updating and the P300. Behav. Brain Sci. 21, 152
    1. Drost U., Rieger M., Brass M., Gunter T., Prinz W. (2005a). Action-effect coupling in pianists. Psychol. Res. 69, 233–24110.1007/s00426-004-0175-8
    1. Drost U., Rieger M., Brass M., Gunter T., Prinz W. (2005b). When hearing turns into playing: movement induction by auditory stimuli in pianists. Q. J. Exp. Psychol. A 58, 1376–1389
    1. Ethofer T., Kreifelts B., Wiethoff S., Wolf J., Grodd W., Vuilleumier P., Wildgruber D. (2009). Differential influences of emotion, task, and novelty on brain regions underlying the processing of speech melody. J. Cogn. Neurosci. 21, 1255–1268
    1. Fazio P., Cantagallo A., Craighero L., D'Ausilio A., Roy A., Pozzo T., Calzolari F., Granieri E., Fadiga L. (2009). Encoding of human action in Broca's area. Brain 132, 1980.
    1. Fedorenko E., Patel A., Casasanto D., Winawer J., Gibson E. (2009). Structural integration in language and music: evidence for a shared system. Mem. Cognit. 37, 1.
    1. Fitch W. (2006). The biology and evolution of music: a comparative perspective. Cognition 100, 173–21510.1016/j.cognition.2005.11.009
    1. Fitch W., Hauser M. (2004). Computational constraints on syntactic processing in a nonhuman primate. Science 303, 377.
    1. Fodor J., Mann V., Samuel A. (1991). “Panel discussion: the modularity of speech and language,” in Modularity and the Motor Theory of Speech Perception: Proceedings of a Conference to Honor Alvin M. Liberman, eds Mattingly Ignatius G., Studdert-Kennedy Michael. (Hillsdale, NJ: Lawrence Erlbaum Associates), 359
    1. Friederici A. (2002). Towards a neural basis of auditory sentence processing. Trends Cogn. Sci. (Regul. Ed.) 6, 78–84
    1. Friederici A. (2004). Processing local transitions versus long-distance syntactic hierarchies. Trends Cogn. Sci. (Regul. Ed.) 8, 245–247
    1. Friederici A., Bahlmann J., Heim S., Schubotz R., Anwander A. (2006). The brain differentiates human and non-human grammars: functional localization and structural connectivity. Proc. Natl. Acad. Sci. U.S.A. 103, 2458.
    1. Friedrich R., Friederici A. (2009). Mathematical logic in the human brain: syntax. PLoS ONE 4, e5599.10.1371/journal.pone.0005599
    1. Fujioka T., Trainor L., Ross B., Kakigi R., Pantev C. (2004). Musical training enhances automatic encoding of melodic contour and interval structure. J. Cogn. Neurosci. 16, 1010–1021
    1. Fujioka T., Trainor L., Ross B., Kakigi R., Pantev C. (2005). Automatic encoding of polyphonic melodies in musicians and nonmusicians. J. Cogn. Neurosci. 17, 1578–1592
    1. Geisler C. (1998). From Sound to Synapse: Physiology of the Mammalian Ear. New York, NY: Oxford University Press
    1. Giard M., Perrin F., Pernier J. (1990). Brain generators implicated in processing of auditory stimulus deviance. a topographic ERP study. Psychophysiology 27, 627–64010.1111/j.1469-8986.1990.tb03184.x
    1. Grahn J., Brett M. (2007). Rhythm and beat perception in motor areas of the brain. J. Cogn. Neurosci. 19, 893–906
    1. Grewe O., Nagel F., Kopiez R., Altenmüller E. (2007a). Emotions over time: synchronicity and development of subjective, physiological, and facial affective reactions of music. Emotion 7, 774–78810.1037/1528-3542.7.4.774
    1. Grewe O., Nagel F., Kopiez R., Altenmüller E. (2007b). Listening to music as a re-creative process: physiological, psychological, and psychoacoustical correlates of chills and strong emotions. Music Percept. 24, 297–31410.1525/mp.2007.24.3.297
    1. Grieser-Painter J., Koelsch S. (2011). Can out-of-context musical sounds convey meaning? an ERP study on the processing of meaning in music. Psychophysiology 48, 645–65510.1111/j.1469-8986.2010.01134.x
    1. Griffiths T., Warren J. (2002). The planum temporale as a computational hub. Trends Neurosci. 25, 348–35310.1016/S0166-2236(02)02191-4
    1. Griffiths T., Warren J. (2004). What is an auditory object? Nat. Rev. Neurosci. 5, 887–892
    1. Groussard M., Viader F., Hubert V., Landeau B., Abbas A., Desgranges B., Eustache F., Platel H. (2010). Musical and verbal semantic memory: two distinct neural networks? Neuroimage, 49, 2764–2773
    1. Groussard M., Viader F., Landeau B., Desgranges B., Eustache F., Platel H. (2009). Neural correlates underlying musical semantic memory. Ann. N. Y. Acad. Sci. 1169, 278–281
    1. Gunter T., Friederici A., Schriefers H. (2000). Syntactic gender and semantic expectancy: ERPs reveal early autonomy and late interaction. J. Cogn. Neurosci. 12, 556–568
    1. Hackett T. A., Kaas J. (2004). “Auditory cortex in primates: functional subdivisions and processing streams,” in The Cognitive Neurosciences, ed. Gazzaniga M. S. (Cambridge: MIT Press; ), 215–232
    1. Haueisen J., Knösche T. (2001). Involuntary motor activity in pianists evoked by music perception. J. Cogn. Neurosci. 13, 786–792
    1. Herbert C., Ethofer T., Anders S., Junghofer M., Wildgruber D., Grodd W., Kissler J. (2009). Amygdala activation during reading of emotional adjectives – an advantage for pleasant content. Soc. Cogn. Affect. Neurosci. 4, 35.
    1. Herrojo-Ruiz M., Jabusch H., Altenmüller E. (2009). Detecting wrong notes in advance: neuronal correlates of error monitoring in pianists. Cereb. Cortex 19, 2625.
    1. Herrojo-Ruiz M., Strübing F., Jabusch H. C., Altenmüller E. (2010). EEG oscillatory patterns are associated with error prediction during music performance and are altered in musician's dystonia. Neuroimage 55, 1791–180310.1016/j.neuroimage.2010.12.050
    1. Hickok G., Buchsbaum B., Humphries C., Muftuler T. (2003). Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt. J. Cogn. Neurosci. 15, 673–682
    1. Hucklebridge F., Lambert S., Clow A., Warburton D., Evans P., Sherwood N. (2000). Modulation of secretory immunoglobulin A in saliva; response to manipulation of mood. Biol. Psychol. 53, 25–35
    1. Huffman R., Henson O. (1990). The descending auditory pathway and acousticomotor systems: connections with the inferior colliculus. Brain Res. Rev. 15, 295–323
    1. Hyde K., Peretz I., Zatorre R. (2008). Evidence for the role of the right auditory cortex in fine pitch resolution. Neuropsychologia 46, 632–63910.1016/j.neuropsychologia.2007.09.004
    1. Janata P., Birk J., Van Horn J., Leman M., Tillmann B., Bharucha J. (2002a). The cortical topography of tonal structures underlying Western music. Science 298, 2167.10.1126/science.1076262
    1. Janata P., Tillmann B., Bharucha J. (2002b). Listening to polyphonic music recruits domain-general attention and working memory circuits. Cogn. Affect. Behav. Neurosci. 2, 121.10.3758/CABN.2.2.121
    1. Janata P., Grafton S. T. (2003). Swinging in the brain: shared neural substrates for behaviors related to sequencing and music. Nat. Neurosci. 6, 682–687
    1. Jäncke L. (2008). Music, memory and emotion. J. Biol. 7, 21.
    1. Johnson K., Nicol T., Zecker S., Kraus N. (2008). Developmental plasticity in the human auditory brainstem. J. Neurosci. 28, 4000.
    1. Johnsrude I., Penhune V., Zatorre R. (2000). Functional specificity in the right human auditory cortex for perceiving pitch direction. Brain 123, 155.
    1. Jusczyk P. (1999). How infants begin to extract words from speech. Trends Cogn. Sci. (Regul. Ed.) 3, 323–328
    1. Juslin P., Laukka P. (2003). Communication of emotions in vocal expression and music performance: different channels, same code? Psychol. Bull. 129, 770–814
    1. Kaas J., Hackett T. (2000). Subdivisions of auditory cortex and processing streams in primates. Proc. Natl. Acad. Sci. U.S.A. 97, 11793.
    1. Kaas J., Hackett T., Tramo M. (1999). Auditory processing in primate cerebral cortex. Curr. Opin. Neurobiol. 9, 164–170
    1. Kamiyama K., Katahira K., Abla D., Hori K., Okanoya K. (2010). Music playing and memory trace: evidence from event-related potentials. Neurosci. Res. 67, 334–340
    1. Karbusicky V. (1986). Grundriß der musikalischen Semantik. Darmstadt: Wissenschaftliche Buchgesellschaft
    1. Katahira K., Abla D., Masuda S., Okanoya K. (2008). Feedback-based error monitoring processes during musical performance: an ERP study. Neurosci. Res. 61, 120–128
    1. Khalfa S., Isabelle P., Jean-Pierre B., Manon R. (2002). Event-related skin conductance responses to musical emotions in humans. Neurosci. Lett. 328, 145–149
    1. Kirschner S., Tomasello M. (2009). Joint drumming: social context facilitates synchronization in preschool children. J. Exp. Child. Psychol. 102, 299–314
    1. Knösche T., Neuhaus C., Haueisen J., Alter K., Maess B., Witte O., Friederici A. D. (2005). Perception of phrase structure in music. Hum. Brain Mapp. 24, 259–27310.1002/hbm.20088
    1. Koch M., Lingenhöhl K., Pilz P. (1992). Loss of the acoustic startle response following neurotoxic lesions of the caudal pontine reticular formation: possible role of giant neurons. Neuroscience 49, 617–62510.1016/0306-4522(92)90231-P
    1. Koechlin E., Jubault T. (2006). Broca's area and the hierarchical organization of human behavior. Neuron 50, 963–97410.1016/j.neuron.2006.05.017
    1. Koelsch S. (2000). Brain and Music – A Contribution to the Investigation of Central Auditory Processing with a New Electrophysiological Approach. Leipzig: Risse
    1. Koelsch S. (2004). Spatio-temporal Aspects of Processing Syntax and Semantics in Music. Habilitation thesis, University of Leipzig
    1. Koelsch S. (2009a). Music-syntactic processing and auditory memory: similarities and differences between ERAN and MMN. Psychophysiology 46, 179–19010.1111/j.1469-8986.2008.00752.x
    1. Koelsch S. (2009b). A neuroscientific perspective on music therapy. Ann. N. Y. Acad. Sci. 1169, 374–38410.1111/j.1749-6632.2009.04592.x
    1. Koelsch S. (2010). Towards a neural basis of music-evoked emotions. Trends Cogn. Sci. (Regul. Ed.) 14, 131–137
    1. Koelsch S. (2011). Towards a neural basis of processing musical semantics. Phys. Life Rev. (in press).
    1. Koelsch S., Fritz T., Schulze K., Alsop D., Schlaug G. (2005a). Adults and children processing music: an fMRI study. Neuroimage 25, 1068–1076
    1. Koelsch S., Gunter T., Wittfoth M., Sammler D. (2005b). Interaction between syntax processing in language and in music: an ERP study. J. Cogn. Neurosci. 17, 1565–1577
    1. Koelsch S., Gunter T. C., von Cramon D. Y., Zysset S., Lohmann G., Friederici A. D. (2002). Bach speaks: a cortical “language-network” serves the processing of music. Neuroimage 17, 956–96610.1006/nimg.2002.1154
    1. Koelsch S., Jentschke S. (2010). Differences in electric brain responses to melodies and chords. J. Cogn. Neurosci. 22, 2251–2262
    1. Koelsch S., Kasper E., Sammler D., Schulze K., Gunter T. C., Friederici A. D. (2004). Music, language, and meaning: brain signatures of semantic processing. Nat. Neurosci. 7, 302–307
    1. Koelsch S., Mulder J. (2002). Electric brain responses to inappropriate harmonies during listening to expressive music. Clin. Neurophysiol. 113, 862–86910.1016/S1388-2457(02)00050-0
    1. Koelsch S., Offermanns K., Franzke P. (2010). Music in the treatment of affective disorders: an exploratory investigation of a new method for music-therapeutic research. Music Percept. 27, 307–31610.1525/mp.2010.27.4.307
    1. Koelsch S., Schröger E., Tervaniemi M. (1999). Superior pre-attentive auditory processing in musicians. Neuroreport 10, 1309.
    1. Koelsch S., Schulze K., Sammler D., Fritz T., Müller K., Gruber O. (2009). Functional architecture of verbal and tonal working memory: an FMRI study. Hum. Brain Mapp. 30, 859–87310.1002/hbm.20550
    1. Koelsch S., Siebel W. (2005). Towards a neural basis of music perception. Trends Cogn. Sci. (Regul. Ed.) 9, 578–584
    1. Koopman C., Davies S. (2001). Musical meaning in a broader perspective. J. Aesthet. Art Criticism 59, 261–27310.1111/1540-6245.00024
    1. Korzyukov O., Winkler I., Gumenyuk V., Alho K. (2003). Processing abstract auditory features in the human auditory cortex. Neuroimage 20, 2245–225810.1016/j.neuroimage.2003.08.014
    1. Kreutz G., Bongard S., Rohrmann S., Hodapp V., Grebe D. (2004). Effects of choir singing or listening on secretory immunoglobulin A, cortisol, and emotional state. J. Behav. Med. 27, 623–635
    1. Lahav A., Saltzman E., Schlaug G. (2007). Action representation of sound: audiomotor recognition network while listening to newly acquired actions. J. Neurosci., 27, 308–314
    1. Lamprea M., Cardenas F., Vianna D., Castilho V., Cruz-Morales S., Brandão M. (2002). The distribution of fos immunoreactivity in rat brain following freezing and escape responses elicited by electrical stimulation of the inferior colliculus. Brain Res. 950, 186–19410.1016/S0006-8993(02)03036-6
    1. Langner G., Ochse M. (2006). The neural basis of pitch and harmony in the auditory system. Music. Sci. 10, 185
    1. Lau E., Phillips C., Poeppel D. (2008). A cortical network for semantics: (de)constructing the n400. Nat. Rev. Neurosci. 9, 920–933
    1. LeDoux J. (2000). Emotion circuits in the brain. Annu. Rev. Neurosci. 23, 155–184
    1. Lerdahl F. (2001). Tonal Pitch Space. New York, NY: Oxford University Press
    1. Lerdahl F., Jackendoff R. (1999). A Generative Theory of Tonal Music. Cambridge, MA: MIT Press
    1. Lerdahl F., Krumhansl C. (2007). Modeling tonal tension. Music Percept. 24, 329–36610.1525/mp.2007.24.4.329
    1. Levitt P., Moore R. (1979). Origin and organization of brainstem catecholamine innervation in the rat. J. Comp. Neurol. 186, 505–528
    1. Liberman A., Mattingly I. (1985). The motor theory of speech perception revised. Cognition 21, 1–3610.1016/0010-0277(85)90021-6
    1. Liebenthal E., Ellingson M., Spanaki M., Prieto T., Ropella K., Binder J. (2003). Simultaneous ERP and fMRI of the auditory cortex in a passive oddball paradigm. Neuroimage 19, 1395–140410.1016/S1053-8119(03)00228-3
    1. Liegeois-Chauvel C., Peretz I., Babaie M., Laguitton V., Chauvel P. (1998). Contribution of different cortical areas in the temporal lobes to music processing. Brain 121, 1853–186710.1093/brain/121.10.1853
    1. Longoni F., Grande M., Hendrich V., Kastrau F., Huber W. (2005). An fMRI study on conceptual, grammatical, and morpho-phonological processing. Brain Cogn. 57, 131–13410.1016/j.bandc.2004.08.032
    1. Lundqvist L., Carlsson F., Hilmersson P., Juslin P. (2009). Emotional responses to music: experience, expression, and physiology. Psychol. Music 37, 61
    1. Maess B., Jacobsen T., Schröger E., Friederici A. (2007). Localizing pre-attentive auditory memory-based comparison: magnetic mismatch negativity to pitch change. Neuroimage 37, 561–57110.1016/j.neuroimage.2007.05.040
    1. Maess B., Koelsch S., Gunter T. C., Friederici A. D. (2001). Musical syntax is processed in the area of Broca: an MEG-study. Nat. Neurosci. 4, 540–545
    1. Maidhof C., Koelsch S. (2011). Effects of selective attention on syntax processing in music and language. J. Cogn. Neurosci. [Epub ahead of print].
    1. Maidhof C., Rieger M., Prinz W., Koelsch S. (2009). Nobody is perfect: ERP effects prior to performance errors in musicians indicate fast monitoring processes. PLoS ONE 4, e5032.10.1371/journal.pone.0005032
    1. Maidhof C., Vavatzanidis N., Prinz W., Rieger M., Koelsch S. (2010). Processing expectancy violations during music performance and perception: an ERP study. J. Cogn. Neurosci. 22, 2401–2413
    1. Makuuchi M., Bahlmann J., Anwander A., Friederici A. (2009). Segregating the core computational faculty of human language from working memory. Proc. Natl. Acad. Sci. U.S.A. 106, 8362.
    1. McCraty R., Atkinson M., Rein G., Watkins A. (1996). Music enhances the effect of positive emotional states on salivary IgA. Stress Med. 12, 167–17510.1002/(SICI)1099-1700(199607)12:3<167::AID-SMI697>;2-2
    1. Menning H., Roberts L., Pantev C. (2000). Plastic changes in the auditory cortex induced by intensive frequency discrimination training. Neuroreport 11, 817.
    1. Meyer L. (1956). Emotion and Meaning in Music. Chicago: University of Chicago Press
    1. Meyer M., Alter K., Friederici A., Lohmann G., Cramon D. (2002). FMRI reveals brain regions mediating slow prosodic modulations in spoken sentences. Hum. Brain Mapp. 17, 73–8810.1002/hbm.10042
    1. Meyer M., Steinhauer K., Alter K., Friederici A., Cramon D. (2004). Brain activity varies with modulation of dynamic pitch variances in sentence melody. Brain Lang. 89, 277–28910.1016/S0093-934X(03)00350-X
    1. Miranda R., Ullman M. (2007). Double dissociation between rules and memory in music: an event-related potential study. Neuroimage 38, 331–34510.1016/j.neuroimage.2007.07.034
    1. Molholm S., Martinez A., Ritter W., Javitt D., Foxe J. (2005). The neural circuitry of pre-attentive auditory change-detection: an fMRI study of pitch and duration mismatch negativity generators. Cereb. Cortex 15, 545.
    1. Moon C., Cooper R., Fifer W. (1993). Two-day-olds prefer their native language. Infant Behav. Dev. 16, 495–500
    1. Moore B. (2008). An Introduction to the Psychology of Hearing, 5th Edn Bingley: Emerald
    1. Musacchia G., Sams M., Skoe E., Kraus N. (2007). Musicians have enhanced subcortical auditory and audiovisual processing of speech and music. Proc. Natl. Acad. Sci. U.S.A. 104, 15894.
    1. Näätänen R., Tervaniemi M., Sussman E., Paavilainen P., Winkler I. (2001). 'Primitive intelligence' in the auditory cortex. Trends Neurosci. 24, 283–28810.1016/S0166-2236(00)01790-2
    1. Nan Y., Knösche T., Friederici A. (2006). The perception of musical phrase structure: a cross-cultural ERP study. Brain Res. 1094, 179–19110.1016/j.brainres.2006.03.115
    1. Nelken I. (2004). Processing of complex stimuli and natural scenes in the auditory cortex. Curr. Opin. Neurobiol. 14, 474–480
    1. Neuhaus C., Knösche T., Friederici A. (2006). Effects of musical expertise and boundary markers on phrase perception in music. J. Cogn. Neurosci. 18, 472–493
    1. Obleser J., Meyer L., Friederici A. (2011). Dynamic assignment of neural resources in auditory comprehension of complex sentences. Neuroimage 56, 2310–2320
    1. Öngür D., Price J. L. (2000). The organization of networks within the orbital and medial prefrontal cortex of rats, monkeys and humans. Cereb. Cortex 10, 206.
    1. Opitz B., Kotz S. (2011). Ventral premotor cortex lesions disrupt learning of sequential grammatical structures. Cortex. (in press).
    1. Opitz B., Rinne T., Mecklinger A., Cramon D., Schröger E. (2002). Differential contribution of frontal and temporal cortices to auditory change detection: fMRI and ERP results. Neuroimage 15, 167–17410.1006/nimg.2001.0970
    1. Orini M., Bailón R., Enk R., Koelsch S., Mainardi L., Laguna P. (2010). A method for continuously assessing the autonomic response to music-induced emotions through HRV analysis. Med. Biol. Eng. Comput. 48, 423–433
    1. Overy K., Molnar-Szakacs I. (2009). Being together in time: musical experience and the mirror neuron system. Music Percept. 26, 489–50410.1525/mp.2009.26.5.489
    1. Paller K. A., McCarthy G., Wood C. C. (1992). Event-related potentials elicited by deviant endings to melodies. Psychophysiology 29, 202–20610.1111/j.1469-8986.1992.tb01686.x
    1. Panksepp J., Bernatzky G. (2002). Emotional sounds and the brain: the neuro-affective foundations of musical appreciation. Behav. Processes 60, 133–15510.1016/S0376-6357(02)00080-3
    1. Pantev C., Roberts L. E., Schulz M., Engelien A., Ross B. (2001). Timbre-specific enhancement of auditory cortical representation in musicians. Neuroreport 12, 169–17410.1097/00001756-200101220-00041
    1. Parsons L. (2001). Exploring the functional neuroanatomy of music performance, perception, and comprehension. Ann. N. Y. Acad. Sci. 930, 211–231
    1. Patel A. (2003). Language, music, syntax and the brain. Nat. Neurosci. 6, 674–681
    1. Patel A. (2008). Music, Language, and the Brain. New York, NY: Oxford University Press
    1. Patel A., Balaban E. (2001). Human pitch perception is reflected in the timing of stimulus-related cortical activity. Nat. Neurosci. 4, 839–844
    1. Patel A., Gibson E., Ratner J., Besson M., Holcomb P. (1998). Processing syntactic relations in language and music: an event-related potential study. J. Cogn. Neurosci. 10, 717–733
    1. Patel A., Iversen J., Bregman M., Schulz I. (2009). Experimental evidence for synchronization to a musical beat in a nonhuman animal. Curr. Biol. 19, 827–830
    1. Patel A., Iversen J., Wassenaar M., Hagoort P. (2008). Musical syntactic processing in agrammatic Broca's aphasia. Aphasiology 22, 776–78910.1080/02687030701803804
    1. Patterson R., Uppenkamp S., Johnsrude I., Griffiths T. (2002). The processing of temporal pitch and melody information in auditory cortex. Neuron 36, 767–77610.1016/S0896-6273(02)01060-7
    1. Perani D., Saccuman M., Scifo P., Spada D., Andreolli G., Rovelli R., Baldoli C., Koelsch S. (2010). Functional specializations for music processing in the human newborn brain. Proc. Natl. Acad. Sci. U.S.A. 107, 4758.
    1. Peretz I., Brattico E., Järvenpää M., Tervaniemi M. (2009). The amusic brain: in tune, out of key, and unaware. Brain 132, 1277.
    1. Peretz I., Coltheart M. (2003). Modularity of music processing. Nat. Neurosci. 6, 688–691
    1. Peretz I., Zatorre R. (2005). Brain organization for music processing. Annu. Rev. Psychol. 56, 89–11410.1146/annurev.psych.56.091103.070225
    1. Petkov C., Kayser C., Augath M., Logothetis N. (2006). Functional imaging reveals numerous fields in the monkey auditory cortex. PLoS Biol. 4, e215.10.1371/journal.pbio.0040215
    1. Pickles J. (2008). An Introduction to the Physiology of Hearing, 3rd Edn Bingley: Emerald
    1. Piston W. (1948/1987). Harmony. New York: Norton
    1. Polich J. (2007). Updating P300: an integrative theory of P3a and P3b. Clin. Neurophysiol. 118, 2128–214810.1016/j.clinph.2007.04.019
    1. Poulin-Charronnat B., Bigand E., Koelsch S. (2006). Processing of musical syntax tonic versus subdominant: an event-related potential study. J. Cogn. Neurosci. 18, 1545–1554
    1. Quiroga Murcia C., Bongard S., Kreutz G. (2009). Emotional and neurohumoral responses to dancing tango argentino. Music Med. 1, 14
    1. Rameau J.-P. (1722). Traité de l'harmonie réduite à ses principes naturels. Paris: J. B. C. Ballard
    1. Rammsayer T., Altenmüller E. (2006). Temporal information processing in musicians and nonmusicians. Music Percept. 24, 37–4810.1525/mp.2006.24.1.37
    1. Riemann H. (1877/1971). Musikalische Syntaxis: Grundriss einer harmonischen Satzbildungslehre. Niederwalluf: Sändig
    1. Rilling J., Gutman D., Zeh T., Pagnoni G., Berns G., Kilts C. (2002). A neural basis for social cooperation. Neuron 35, 395–40510.1016/S0896-6273(02)00755-9
    1. Rinne T., Degerman A., Alho K. (2005). Superior temporal and inferior frontal cortices are activated by infrequent sound duration decrements: an fMRI study. Neuroimage 26, 66–7210.1016/j.neuroimage.2005.01.017
    1. Rizzolatti G., Craighero L. (2004). The mirror-neuron system. Annu. Rev. Neurosci. 27, 169–192
    1. Rohrmeier M. (2007). “A generative grammar approach to diatonic harmonic structure,” in Proceedings of the 4th Sound and Music Computing Conference, Lefkada, 97–100
    1. Rohrmeier M. (2011). A generative grammar approach to tonal harmony. J. Math. Music 5
    1. Rüsseler J., Altenmüller E., Nager W., Kohlmetz C., Münte T. (2001). Event-related brain potentials to sound omissions differ in musicians and non-musicians. Neurosci. Lett. 308, 33–36
    1. Scherer K. (2005). What are emotions? And how can they be measured? Soc. Sci. Inform. 44, 695
    1. Schmidt-Kassow M., Kotz S. (2009). Event-related brain potentials suggest a late interaction of meter and syntax in the P600. J. Cogn. Neurosci. 21, 1693–1708
    1. Schoenberg A. (1969). Structural Functions of Harmony. New York: Norton
    1. Schonwiesner M., Novitski N., Pakarinen S., Carlson S., Tervaniemi M., Näätänen R. (2007). Heschl's gyrus, posterior superior temporal gyrus, and mid-ventrolateral prefrontal cortex have different roles in the detection of acoustic changes. J. Neurophysiol. 97, 2075.
    1. Schulze K., Mueller K., Koelsch S. (2011a). Neural correlates of strategy use during auditory working memory in musicians and non-musicians. Eur. J. Neurosci. 33, 189–19610.1111/j.1460-9568.2010.07470.x
    1. Schulze K., Zysset S., Mueller K., Friederici A. D., Koelsch S. (2011b). Neuroarchitecture of verbal and tonal working memory in nonmusicians and musicians. Hum. Brain Mapp. 32, 771–78310.1002/hbm.21060
    1. Scott S. (2005). Auditory processing–speech, space and auditory objects. Curr. Opin. Neurobiol. 15, 197–201
    1. Shinn-Cunningham B. (2008). Object-based auditory and visual attention. Trends Cogn. Sci. (Regul. Ed.) 12, 182–18610.1016/j.tics.2008.02.003
    1. Sinex D., Guzik H., Li H., Henderson Sabes J. (2003). Responses of auditory nerve fibers to harmonic and mistuned complex tones. Hear. Res. 182, 130–139
    1. Slevc L., Rosenberg J., Patel A. (2009). Making psycholinguistics musical: self-paced reading time evidence for shared processing of linguistic and musical syntax. Psychon. Bull. Rev. 16, 374.
    1. Sloboda J. A. (1991). Music structure and emotional response: some empirical findings. Psychol. Music 19, 110–120
    1. Song J., Skoe E., Wong P., Kraus N. (2008). Plasticity in the adult human auditory brainstem following short-term linguistic training. J. Cogn. Neurosci. 20, 1892–1902
    1. Steinbeis N., Koelsch S. (2008a). Comparing the processing of music and language meaning using EEG and FMRI provides evidence for similar and distinct neural representations. PLoS ONE 3, e2226.10.1371/journal.pone.0002226
    1. Steinbeis N., Koelsch S. (2008b). Shared neural resources between music and language indicate semantic processing of musical tension-resolution patterns. Cereb. Cortex 18, 1169.
    1. Steinbeis N., Koelsch S. (2008c). Understanding the intentions behind man-made products elicits neural activity in areas dedicated to mental state attribution. Cereb. Cortex 19, 619–623
    1. Steinbeis N., Koelsch S. (2011). Affective priming effects of musical sounds on the processing of word meaning. J. Cogn. Neurosci. 23, 604–621
    1. Steinhauer K., Alter K., Friederici A. D. (1999). Brain potentials indicate immediate use of prosodic cues in natural speech processing. Nat. Neurosci. 2, 191–196
    1. Strait D., Kraus N., Skoe E., Ashley R. (2009). Musical experience and neural efficiency – effects of training on subcortical processing of vocal expressions of emotion. Eur. J. Neurosci. 29, 661–668
    1. Sussman E. (2007). A new view on the MMN and attention debate: the role of context in processing auditory events. J. Psychophysiol. 21, 164–17510.1027/0269-8803.21.34.164
    1. Tervaniemi M. (2009). Musicians – same or different? Ann. N. Y. Acad. Sci. 1169, 151–156
    1. Tervaniemi M., Castaneda A., Knoll M., Uther M. (2006a). Sound processing in amateur musicians and nonmusicians: event-related potential and behavioral indices. Neuroreport 17, 1225.
    1. Tervaniemi M., Szameitat A., Kruck S., Schroger E., Alter K., De Baene W., Friederici A. D. (2006b). From air oscillations to music and speech: functional magnetic resonance imaging evidence for fine-tuned neural networks in audition. J. Neurosci. 26, 8647.
    1. Tervaniemi M., Huotilainen M. (2003). The promises of change-related brain potentials in cognitive neuroscience of music. Ann. N. Y. Acad. Sci. 999, 29–39
    1. Tervaniemi M., Ilvonen T., Karma K., Alho K., Näätänen R. (1997). The musical brain: brain waves reveal the neurophysiological basis of musicality in human subjects. Neurosci. Lett. 226, 1–410.1016/S0375-9601(96)00906-1
    1. Tervaniemi M., Just V., Koelsch S., Widmann A., Schröger E. (2005). Pitch discrimination accuracy in musicians vs nonmusicians: an event-related potential and behavioral study. Exp. Brain Res. 161, 1–1010.1007/s00221-004-2044-5
    1. Tervaniemi M., Kruck S., De Baene W., Schröger E., Alter K., Friederici A. (2009). Top-down modulation of auditory processing: effects of sound context, musical expertise and attentional focus. Eur. J. Neurosci. 30, 1636–1642
    1. Tervaniemi M., Kujala A., Alho K., Virtanen J., Ilmoniemi R., Näätänen R. (1999). Functional specialization of the human auditory cortex in processing phonetic and musical sounds: a magnetoencephalographic (MEG) study. Neuroimage 9, 330–33610.1006/nimg.1999.0405
    1. Tervaniemi M., Medvedev S., Alho K., Pakhomov S., Roudas M., Zuijen T., Näätänen R. (2000). Lateralized automatic auditory processing of phonetic versus musical information: a PET study. Hum. Brain Mapp. 10, 74–7910.1002/(SICI)1097-0193(200006)10:2<74::AID-HBM30>;2-2
    1. Tervaniemi M., Rytkönen M., Schröger E., Ilmoniemi R., Näätänen R. (2001). Superior formation of cortical memory traces for melodic patterns in musicians. Learn. Mem. 8, 295.
    1. Tillmann B., Bharucha J., Bigand E. (2000). Implicit learning of tonality: a self-organized approach. Psychol. Rev. 107, 885–913
    1. Tillmann B., Janata P., Bharucha J. (2003). Activation of the inferior frontal cortex in musical priming. Brain Res. Cogn. Brain Res. 16, 145–161
    1. Tramo M., Shah G., Braida L. (2002). Functional role of auditory cortex in frequency processing and pitch perception. J. Neurophysiol. 87, 122.
    1. Trehub S. (2003). The developmental origins of musicality. Nat. Neurosci. 6, 669–673
    1. Van Petten C., Kutas M. (1990). Interactions between sentence context and word frequency in event-related brain potentials. Mem. Cognit. 18, 380–393
    1. Verleger R. (1990). P3-evoking wrong notes: unexpected, awaited, or arousing? Int. J. Neurosci. 55, 171–179
    1. Võ M., Conrad M., Kuchinke L., Urton K., Hofmann M., Jacobs A. (2009). The Berlin affective word list reloaded (BAWL-R). Behav. Res. Methods 41, 534–538
    1. Wallin N. L., Merker B., Brown S. (eds). (2000). The Origins of Music. Cambridge, MA: MIT Press
    1. Warren J. D., Uppenkamp S., Patterson R. D., Griffiths T. D. (2003). Separating pitch chroma and pitch height in the human brain. Proc. Natl. Acad. Sci. U.S.A. 100, 10038–10042
    1. Watanabe T., Yagishita S., Kikyo H. (2008). Memory of music: roles of right hippocampus and left inferior frontal gyrus. Neuroimage 39, 483–49110.1016/j.neuroimage.2007.08.024
    1. Whitfield I. (1980). Auditory cortex and the pitch of complex tones. J. Acoust. Soc. Am. 67, 644.
    1. Winkler I. (2007). Interpreting the mismatch negativity. J. Psychophysiol. 21, 147–16310.1027/0269-8803.21.34.147
    1. Winkler I., Denham S., Nelken I. (2009). Modeling the auditory scene: predictive regularity representations and perceptual objects. Trends Cogn. Sci. (Regul. Ed.) 13, 532–540
    1. Wittfoth M., Schröder C., Schardt D., Dengler R., Heinze H., Kotz S. (2010). On emotional conflict: interference resolution of happy and angry prosody reveals valence-specific effects. Cereb. Cortex 20, 383.
    1. Wong P., Skoe E., Russo N., Dees T., Kraus N. (2007). Musical experience shapes human brain-stem encoding of linguistic pitch patterns. Nat. Neurosci. 10, 420–422
    1. Zatorre R. (1988). Pitch perception of complex tones and human temporal-lobe function. J. Acoust. Soc. Am. 84, 566–572
    1. Zatorre R. (2001). Neural specializations for tonal processing. Ann. N. Y. Acad. Sci. 930, 193–210
    1. Zatorre R., Belin P., Penhune V. (2002). Structure and function of auditory cortex: music and speech. Trends Cogn. Sci. (Regul. Ed.) 6, 37–46
    1. Zatorre R., Evans A., Meyer E. (1994). Neural mechanisms underlying melodic perception and memory for pitch. J. Neurosci. 14, 1908.
    1. Zuijen T., Sussman E., Winkler I., Näätänen R., Tervaniemi M. (2004). Grouping of sequential sounds-an event-related potential study comparing musicians and nonmusicians. J. Cogn. Neurosci. 16, 331–338
    1. Zuijen T., Sussman E., Winkler I., Näätänen R., Tervaniemi M. (2005). Auditory organization of sound sequences by a temporal or numerical regularity–a mismatch negativity study comparing musicians and non-musicians. Brain Res. Cogn. Brain Res. 23, 270–276

Source: PubMed
