Culture in music cognition

Culture in music cognition refers to the impact that a person's culture has on their music cognition, including their preferences, emotion recognition, and musical memory. Musical preferences are biased toward culturally familiar musical traditions beginning in infancy, and adults' classification of the emotion of a musical piece depends on both culturally specific and universal structural features.[1][2][3] Additionally, individuals' musical memory abilities are greater for culturally familiar music than for culturally unfamiliar music.[4][5] The sum of these effects makes culture a powerful influence in music cognition.

Preferences

Effect of culture

Culturally bound preferences for and familiarity with music begin in infancy and continue through adolescence and adulthood.[1][6] People tend to prefer and remember music from their own cultural tradition.[1][3]

Familiarity with culturally typical meters is already established in infants only a few months old.[1] The looking times of 4- to 8-month-old Western infants indicate a preference for Western meter in music, while Turkish infants of the same age prefer both Turkish and Western meters (Western meters not being completely unfamiliar in Turkish culture). Both groups preferred either meter over an arbitrary meter.[1]

In addition to influencing preference for meter, culture affects people's ability to correctly identify music styles. Adolescents from Singapore and the UK rated familiarity and preference for excerpts of Chinese, Malay, and Indian music styles.[6] Neither group demonstrated a preference for the Indian music samples, although the Singaporean teenagers recognized them. Participants from Singapore showed higher preference for and ability to recognize the Chinese and Malay samples; UK participants showed little preference or recognition for any of the music samples, as those types of music are not present in their native culture.[6]

Effect of musical experience

An individual's musical experience may affect how they form preferences for music from their own culture and from other cultures.[7] In one study, American and Japanese non-music majors both indicated a preference for Western music, although the Japanese participants were more receptive to Eastern music. Each group included participants with little musical experience and participants who had received supplemental musical training. While both American and Japanese participants disliked formal Eastern styles of music and preferred Western styles, those with greater musical experience showed a wider range of preferences not limited to their own culture.[7]

Dual cultures

Bimusicalism is a phenomenon in which people well versed in the music of two different cultures exhibit sensitivity to both styles of music.[8] In a study of participants familiar with Western music, Indian music, or both, the bimusical participants (exposed to both Indian and Western styles) showed no bias toward either style in recognition tasks and did not rate one style as more tense than the other. In contrast, the Western-only and Indian-only participants recognized music from their own culture more successfully and rated the other culture's music as more tense overall. These results indicate that everyday exposure to music from two cultures can produce cognitive sensitivity to both musical styles.[8]

Bilingualism also shapes preferences for the language of a song's lyrics.[9] When monolingual (English-speaking) and bilingual (Spanish- and English-speaking) sixth graders listened to the same song in instrumental, English, and Spanish versions, the bilingual students preferred the Spanish version, while the monolingual students more often preferred the instrumental version; the children's self-reported distraction was the same for all excerpts. The bilingual students also identified most closely with the Spanish version.[9] Thus, the language of a song's lyrics interacts with a listener's culture and language abilities to shape preferences.

Emotion recognition

The cue-redundancy model of emotion recognition in music differentiates between universal, structural auditory cues and culturally bound, learned auditory cues (see schematic below).[2][3]

Psychophysical cues

Structural cues that span all musical traditions include dimensions such as pace (tempo), loudness, and timbre.[10] Fast tempo, for example, is typically associated with happiness, regardless of a listener's cultural background.

Culturally bound cues

Culture-specific cues rely on knowledge of the conventions of a particular musical tradition.[2][11] Ethnomusicologists have observed that particular songs are sung in particular situations across cultures, and that these occasions are marked by cues recognized by members of the culture.[12] A given timbre may be interpreted as reflecting one emotion by Western listeners and a different emotion by Eastern listeners.[3][13] Other cues are also culturally bound; for example, rock and roll is commonly identified as a rebellious style of music associated with teenagers, reflecting their ideals and beliefs.[14]

Cue-redundancy model

Schematic of the cue-redundancy model

According to the cue-redundancy model, individuals exposed to music from their own cultural tradition use both psychophysical and culturally bound cues to identify emotion.[10] Conversely, perception of intended emotion in unfamiliar music relies solely on universal, psychophysical properties.[2] Japanese listeners accurately categorize angry, joyful, and sad musical excerpts from familiar traditions (Japanese and Western samples) and from a relatively unfamiliar tradition (Hindustani).[2] Simple, fast melodies receive joyful ratings from these participants; simple, slow samples receive sad ratings; and loud, complex excerpts are perceived as angry.[2] The strong relationship between emotional judgments and structural acoustic cues suggests that universal musical properties are important in categorizing unfamiliar music.[2][3]

When Korean and American participants judged the intended emotion of Korean folk songs, the American group identified happy and sad songs at levels equivalent to those of the Korean listeners.[10] Surprisingly, the Americans were more accurate than the Korean group in judging anger. The latter result implies that cultural differences in anger perception occur independently of familiarity, while the similarity of the American and Korean happy and sad judgments points to the role of universal auditory cues in emotional perception.[10]

Categorization of unfamiliar music varies with the intended emotion.[2][13] Timbre mediates Western listeners' recognition of angry and peaceful Hindustani songs: flute timbre supports the detection of peace, whereas string timbre aids the identification of anger.[13] Happy and sad assessments, by contrast, rely primarily on relatively "low-level" structural information such as tempo. Both low-level cues (e.g., slow tempo) and timbre aid the detection of peaceful music, but only timbre cues anger recognition.[13] Communication of peace therefore takes place at multiple structural levels, while anger appears to be conveyed almost exclusively by timbre. Similarities between aggressive vocalizations and angry music (e.g., roughness) may contribute to the salience of timbre in anger assessments.[15]

Stereotype theory of emotion in music

Model for STEM

The stereotype theory of emotion in music (STEM) proposes that cultural stereotyping can affect the emotion perceived in music. STEM argues that, for some listeners with low expertise, emotion perception in music is based on stereotyped associations that the listener holds about the encoding culture of the music (i.e., the culture representative of a particular music genre, such as Brazilian culture encoded in bossa nova).[16] STEM extends the cue-redundancy model: in addition to positing two sources of emotional cues, it explains some culturally bound cues specifically in terms of stereotyping. In particular, STEM makes the more specific prediction that emotion perceived in music depends to some extent on cultural stereotypes of the genre being heard.

Complexity

Because musical complexity is a psychophysical dimension, the cue-redundancy model predicts that complexity is perceived independently of experience. However, South African and Finnish listeners assign different complexity ratings to the same African folk songs.[17] The cue-redundancy model may therefore be overly simplistic in its distinction between structural feature detection and cultural learning, at least in the case of complexity.

Repetition

When listening to music from one's own cultural tradition, repetition plays a key role in emotion judgments. American listeners who hear classical or jazz excerpts several times rate the emotion elicited and conveyed by the pieces as more intense than do participants who hear them only once.[18]

Methodological limitations

Methodological limitations of previous studies preclude a complete understanding of the role of psychophysical cues in emotion recognition. Conflicting mode and tempo cues can elicit "mixed affect", demonstrating the potential for mixed emotional percepts.[19] Dichotomous rating scales (e.g., simple happy/sad ratings) may mask this phenomenon, as such tasks require participants to report only a single component of a multidimensional affective experience.

Memory

Enculturation is a powerful influence on music memory. Both long-term and working memory systems are critically involved in the appreciation and comprehension of music. Long-term memory enables the listener to develop musical expectations based on previous experience, while working memory is needed to relate pitches to one another within a phrase, between phrases, and throughout a piece.[20]

Neuroscience

A diagram showing the locations of the lobes of the brain

Neuroscientific evidence suggests that memory for music is, at least in part, special and distinct from other forms of memory.[21] The neural processes of musical memory retrieval share much with those of verbal memory retrieval, as indicated by functional magnetic resonance imaging studies comparing the brain areas activated during each task.[5] Both musical and verbal memory retrieval activate the left inferior frontal cortex, which is thought to be involved in executive function, particularly during verbal retrieval, and the posterior middle temporal cortex, which is thought to be involved in semantic retrieval.[5][22][23] However, musical semantic retrieval also bilaterally activates the superior temporal gyri, which contain the primary auditory cortex.[5]

Effect of culture

Memory for music

A diagram showing the locations of the brain's gyri

Despite the universality of music, enculturation has a pronounced effect on individuals' memory for music. Evidence suggests that people develop their cognitive understanding of music from their cultures.[4] People are best at recognizing and remembering music in the style of their native culture, and their recognition and memory are better for music from familiar but non-native cultures than for music from unfamiliar cultures.[4] Part of the difficulty in remembering culturally unfamiliar music may arise from the use of different neural processes when listening to familiar and unfamiliar music. For instance, brain areas involved in attention, including the right angular gyrus and middle frontal gyrus, show increased activity when listening to culturally unfamiliar music compared with novel but culturally familiar music.[20]

Development

Enculturation affects music memory in early childhood, before a child's cognitive schemata for music are fully formed, perhaps beginning as early as one year of age.[24][25] Like adults, children are better able to remember novel music from their native culture than from unfamiliar ones, although they are less capable than adults of remembering more complex music.[24]

Children's developing music cognition may be influenced by the language of their native culture.[26] For instance, children in English-speaking cultures develop the ability to identify pitches from familiar songs at 9 or 10 years old, while Japanese children develop the same ability at age 5 or 6.[26] This difference may be due to the Japanese language's use of pitch accents, which encourages better pitch discrimination at an early age, rather than the stress accents upon which English relies.[26]

Musical expectations

Enculturation also biases listeners' expectations such that they expect to hear tones that correspond to culturally familiar modal traditions.[27] For example, Western participants presented with a series of pitches followed by a test tone not present in the original series were more likely to mistakenly indicate that the test tone was originally present if the tone was derived from a Western scale than if it was derived from a culturally unfamiliar scale.[27] Recent research indicates that deviations from expectations in music may prompt out-group derogation.[28]

Limits of enculturation

Despite the powerful effects of musical enculturation, evidence indicates that cognitive comprehension of and affinity for music from other cultures are somewhat plastic. One long-term instance of this plasticity is bimusicalism, a musical phenomenon akin to bilingualism. Bimusical individuals listen frequently to music from two cultures and do not show the biases in recognition memory and perceived tension displayed by individuals whose listening experience is limited to a single musical tradition.[8]

Other evidence suggests that some changes in music appreciation and comprehension can occur over a short period. For instance, after half an hour of passive exposure to original melodies that used familiar Western pitches but an unfamiliar musical grammar or harmonic structure (the Bohlen–Pierce scale), Western participants showed increased recognition memory for, and greater affinity toward, melodies in that grammar.[29] Even brief exposure to unfamiliar music can thus affect music perception and memory.

References

  1. ^ a b c d e Soley, G.; Hannon, E. E. (2010). "Infants prefer the musical meter of their own culture: A cross-cultural comparison". Developmental Psychology. 46 (1): 286–292. doi:10.1037/a0017555. PMID 20053025. S2CID 2868086.
  2. ^ a b c d e f g h Balkwill, L.; Thompson, W. F.; Matsunaga, R. (2004). "Recognition of emotion in Japanese, Western, and Hindustani music by Japanese listeners". Japanese Psychological Research. 46 (4): 337–349. doi:10.1111/j.1468-5584.2004.00265.x.
  3. ^ a b c d e Thompson, William Forde & Balkwill, Laura-Lee (2010). "Chapter 27: Cross-cultural similarities and differences" (PDF). In Juslin, Patrik & Sloboda, John (eds.). Handbook of Music and Emotion: Theory, Research, Applications. Oxford University Press. pp. 755–788. ISBN 978-0-19-960496-8.
  4. ^ a b c Demorest, S. M.; Morrison, S. J.; Beken, M. N.; Jungbluth, D. (2008). "Lost in translation: An enculturation effect in music memory performance". Music Perception. 25 (3): 213–223. doi:10.1525/mp.2008.25.3.213.
  5. ^ a b c d Groussard, M.; Rauchs, G.; Landeau, B.; Viader, F.; Desgranges, B.; Eustache, F.; Platel, H. (2010). "The neural substrates of musical memory revealed by fMRI and two semantic tasks" (PDF). NeuroImage. 53 (4): 1301–1309. doi:10.1016/j.neuroimage.2010.07.013. PMID 20627131. S2CID 8955075.
  6. ^ a b c Teo, T.; Hargreaves, D. J.; Lee, J. (2008). "Musical Preference, Identification, and Familiarity: A Multicultural Comparison of Secondary Students From Singapore and the United Kingdom". Journal of Research in Music Education. 56: 18–32. doi:10.1177/0022429408322953. S2CID 145677868.
  7. ^ a b Darrow, A.; Haack, P.; Kuribayashi, F. (1987). "Descriptors and Preferences for Eastern and Western Musics by Japanese and American Nonmusic Majors". Journal of Research in Music Education. 35 (4): 237–248. doi:10.2307/3345076. JSTOR 3345076. S2CID 144377335.
  8. ^ a b c Wong, P. C. M.; Roy, A. K.; Margulis, E. H. (2009). "Bimusicalism: The Implicit Dual Enculturation of Cognitive and Affective Systems". Music Perception. 27 (2): 81–88. doi:10.1525/MP.2009.27.2.81. PMC 2907111. PMID 20657798.
  9. ^ a b Abril, C.; Flowers, P. (October 2007). "Attention, preference, and identity in music listening by middle school students of different linguistic backgrounds". Journal of Research in Music Education. 55 (3): 204–219. doi:10.1177/002242940705500303. S2CID 143679437.
  10. ^ a b c d Kwoun, S. (2009). "An examination of cue redundancy theory in cross-cultural decoding of emotions in music". Journal of Music Therapy. 46 (3): 217–237. CiteSeerX 10.1.1.1027.2674. doi:10.1093/jmt/46.3.217. PMID 19757877.
  11. ^ Susino, M.; Schubert, E. (2020). "Musical emotions in the absence of music: A cross-cultural investigation of emotion communication in music by extra-musical cues". PLOS ONE. 15 (11): e0241196. Bibcode:2020PLoSO..1541196S. doi:10.1371/journal.pone.0241196. PMC 7673536. PMID 33206664.
  12. ^ "The Cultural Connection". UUA.org. 10 December 2011.
  13. ^ a b c d Balkwill, L.; Thompson, W. F. (1999). "A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues". Music Perception. 17 (1): 43–64. doi:10.2307/40285811. JSTOR 40285811. S2CID 12151228.
  14. ^ Lüdemann, Winfried. "Why culture, not race, determines tastes in music". The Conversation.
  15. ^ Tsai, Chen-Gia; Wang, Li-Ching; Wang, Shwu-Fen; Shau, Yio-Wha; Hsiao, Tzu-Yu; Wolfgang Auhagen (2010). "Aggressiveness of the growl-like timbre: Acoustic characteristics, musical implications, and biomechanical mechanisms". Music Perception. 27 (3): 209–222. doi:10.1525/mp.2010.27.3.209. S2CID 144470585.
  16. ^ Susino, M.; Schubert, E. (2018). "Cultural stereotyping of emotional responses to music genre". Psychology of Music. 47 (3): 342–357. doi:10.1177/0305735618755886. S2CID 149251044.
  17. ^ Eerola, T.; Himberg, T.; Toiviainen, P.; Louhivuori, J. (2006). "Perceived complexity of western and African melodies by western and African listeners". Psychology of Music. 34 (3): 337–371. doi:10.1177/0305735606064842. S2CID 145067190.
  18. ^ Ali, S. O.; Peynircioglu, Z. F. (2011). "Intensity of emotions conveyed and elicited by familiar and unfamiliar music". Music Perception. 27 (3): 177–182. doi:10.1525/MP.2010.27.3.177.
  19. ^ Hunter, P. G.; Schellenberg, G.; Schimmack, U. (2008). "Mixed affective responses to music with conflicting cues". Cognition and Emotion. 22 (2): 327–352. doi:10.1080/02699930701438145. S2CID 144615030.
  20. ^ a b Nan, Y.; Knosche, T. R.; Zysset, S.; Friederici, A. D. (2008). "Cross-cultural music phrase processing: An fMRI study". Human Brain Mapping. 29 (3): 312–328. doi:10.1002/hbm.20390. PMC 6871102. PMID 17497646.
  21. ^ Schulkind, M. D.; Dalla Bella, S.; Kraus, N.; Overy, K.; Pantev, C.; Snyder, J. S.; Tervaniemi, M.; Tillman, M.; Schlaug, G. (2009). "Is Memory for Music Special?". Annals of the New York Academy of Sciences. 1169 (1): 216–224. Bibcode:2009NYASA1169..216S. doi:10.1111/j.1749-6632.2009.04546.x. PMID 19673785. S2CID 9061292.
  22. ^ Hirshorn, E. A.; Thompson-Schill, S. L. (2006). "Role of the left inferior frontal gyrus in covert word retrieval: Neural correlates of switching during verbal fluency". Neuropsychologia. 44 (12): 2547–2557. doi:10.1016/j.neuropsychologia.2006.03.035. PMID 16725162. S2CID 10259662.
  23. ^ Martin, A.; Chao, L. L. (2001). "Semantic memory and the brain: structure and processes". Current Opinion in Neurobiology. 11 (2): 194–201. doi:10.1016/S0959-4388(00)00196-3. PMID 11301239. S2CID 3700874.
  24. ^ a b Morrison, S. J.; Demorest, S. M.; Stambaugh, L. A. (2008). "Enculturation effects in music cognition: The role of age and music complexity". Journal of Research in Music Education. 56 (2): 118–129. doi:10.1177/0022429408322854. S2CID 146203324.
  25. ^ Morrison, S. J.; Demorest, S. M.; Chiao, J. Y. (2009). Cultural constraints on music perception and cognition. Progress in Brain Research. Vol. 178. pp. 67–77. doi:10.1016/S0079-6123(09)17805-6. ISBN 9780444533616. PMID 19874962.
  26. ^ a b c Trehub, S. E.; Schellenberg, E. G.; Nakata, T. (2008). "Cross-cultural perspectives on pitch memory". Journal of Experimental Child Psychology. 100 (1): 40–52. doi:10.1016/j.jecp.2008.01.007. PMID 18325531.
  27. ^ a b Curtis, M. E.; Bharucha, J. J. (2009). "Memory and musical expectation for tones in cultural context". Music Perception. 26 (4): 365–375. doi:10.1525/MP.2009.26.4.365.
  28. ^ Maher, Van Tilburg & Van den Tol, 2013
  29. ^ Loui, P.; Wessel, D. L.; Kam, C. L. H. (2010). "Humans rapidly learn grammatical structure in a new musical scale". Music Perception. 27 (5): 377–388. doi:10.1525/MP.2010.27.5.377. PMC 2927013. PMID 20740059.