Culture in music cognition

Culture in music cognition refers to the impact that a person's culture has on their music cognition, including their preferences, emotion recognition, and musical memory. Musical preferences are biased toward culturally familiar musical traditions beginning in infancy, and adults' classification of the emotions in a musical piece depends on both culturally specific and universal structural features.[1][2][3] Additionally, individuals' musical memory abilities are greater for culturally familiar music than for culturally unfamiliar music.[4][5] The sum of these effects makes culture a powerful influence on music cognition.

Preferences

Effect of culture

Culturally bound preferences for and familiarity with music begin in infancy and continue through adolescence and adulthood.[1][6] People tend to prefer and remember music from their own cultural tradition.[1][3] Familiarity with culturally typical meters is already in place in infants only a few months old.[1] The looking times of 4- to 8-month-old Western infants indicate that they prefer Western meter in music, while Turkish infants of the same age prefer both Turkish and Western meters (Western meters not being completely unfamiliar in Turkish culture). Both groups preferred either of these meters to an arbitrary meter.[1]

In addition to influencing preference for meter, culture affects people's ability to correctly identify musical styles. Adolescents from Singapore and the UK rated their familiarity with and preference for excerpts of Chinese, Malay, and Indian music.[6] Neither group demonstrated a preference for the Indian samples, although the Singaporean teenagers recognized them. Participants from Singapore showed higher preference for, and ability to recognize, the Chinese and Malay samples; UK participants showed little preference or recognition for any of the samples, as those types of music are not present in their native culture.[6]

Effect of musical experience

An individual's musical experience may affect how they form preferences for music from their own culture and from other cultures.[7] American and Japanese participants (non-music majors) both indicated a preference for Western music, but the Japanese participants were more receptive to Eastern music. The participants included one group with little musical experience and one group that had received supplemental musical experience over their lifetimes. Although both American and Japanese participants disliked formal Eastern styles of music and preferred Western styles, those with greater musical experience showed a wider range of preferences not specific to their own culture.[7]

Dual cultures

Bimusicalism is a phenomenon in which people familiar with music from two different cultures exhibit dual sensitivity to both styles of music.[8] In a study of participants familiar with Western music, Indian music, or both, the bimusical participants (exposed to both Indian and Western styles) showed no bias toward either style in recognition tasks and did not indicate that one style was more tense than the other. In contrast, the Western and Indian participants more successfully recognized music from their own culture and felt that the other culture's music was more tense on the whole.
These results indicate that everyday exposure to music from both cultures can result in cognitive sensitivity to musical styles from those cultures.[8]

Bilingualism can likewise shape preferences for the language of a song's lyrics.[9] When monolingual (English-speaking) and bilingual (Spanish- and English-speaking) sixth graders listened to the same song in an instrumental, English, or Spanish version, preference ratings showed that bilingual students preferred the Spanish version, while monolingual students more often preferred the instrumental version; the children's self-reported distraction was the same for all excerpts. The bilingual students also identified most closely with the Spanish version.[9] Thus, the language of a song's lyrics interacts with a listener's culture and language abilities to affect preferences.

Emotion recognition

The cue-redundancy model of emotion recognition in music differentiates between universal, structural auditory cues and culturally bound, learned auditory cues.[2][3]

Psychophysical cues

Structural cues that span all musical traditions include dimensions such as pace (tempo), loudness, and timbre.[10] Fast tempo, for example, is typically associated with happiness, regardless of a listener's cultural background.

Culturally bound cues

Culture-specific cues rely on knowledge of the conventions of a particular musical tradition.[2][11] Ethnomusicologists have observed that cultures designate particular songs for particular occasions, and members of a culture recognize these occasions through learned cues.[12] A particular timbre may be heard as reflecting one emotion by Western listeners and a different emotion by Eastern listeners.[3][13] Genre associations can also act as culturally bound cues: rock and roll, for example, is widely identified as rebellious music associated with teenagers, reflecting the ideals and beliefs of youth culture.[14]

Cue-redundancy model

According to the cue-redundancy model, individuals exposed to music from their own cultural tradition use both psychophysical and culturally bound cues in identifying emotionality.[10] Conversely, perception of intended emotion in unfamiliar music relies solely on universal, psychophysical properties.[2] Japanese listeners accurately categorize angry, joyful, and sad musical excerpts from familiar traditions (Japanese and Western samples) and from a relatively unfamiliar tradition (Hindustani).[2] Simple, fast melodies receive joyful ratings from these participants; simple, slow samples receive sad ratings; and loud, complex excerpts are perceived as angry.[2] Strong relationships between emotional judgments and structural acoustic cues suggest the importance of universal musical properties in categorizing unfamiliar music.[2][3]

When both Korean and American participants judged the intended emotion of Korean folk songs, the American group's identification of happy and sad songs was equivalent to that of the Korean listeners.[10] Surprisingly, the Americans exhibited greater accuracy in anger assessments than the Korean group. The latter result implies that cultural differences in anger perception occur independently of familiarity, while the similarity of the American and Korean happy and sad judgments indicates the role of universal auditory cues in emotional perception.[10]
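The model's two-pathway logic can be illustrated with a minimal sketch in Python. The function, thresholds, and confidence values below are hypothetical stand-ins chosen for illustration; the cue-redundancy model itself is a verbal model, not a computational one.

```python
# Toy sketch of the cue-redundancy model. All thresholds, weights, and the
# function itself are hypothetical; the model is verbal, not computational.

def judge_emotion(tempo_bpm, loudness_db, complexity, knows_tradition):
    """Crude emotion judgment from structural cues, per the model's logic."""
    # Psychophysical cues are available to any listener: loud + complex
    # suggests anger; fast + simple suggests joy; slow + simple, sadness
    # (the response patterns reported for Japanese listeners above).
    if loudness_db > 75 and complexity > 0.6:
        guess = "angry"
    elif tempo_bpm > 120:
        guess = "joyful"
    else:
        guess = "sad"
    confidence = 0.6  # universal cues alone yield a rough judgment
    if knows_tradition:
        # Culturally bound cues (learned conventions such as mode or timbre
        # meanings) supply redundant evidence, sharpening the judgment.
        confidence += 0.3
    return guess, confidence

# An enculturated listener draws on both cue types...
print(judge_emotion(tempo_bpm=150, loudness_db=60, complexity=0.2, knows_tradition=True))
# ...while a listener new to the tradition relies on universal cues alone.
print(judge_emotion(tempo_bpm=150, loudness_db=60, complexity=0.2, knows_tradition=False))
```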
Categorization of unfamiliar music varies with the intended emotion.[2][13] Timbre mediates Western listeners' recognition of angry and peaceful Hindustani songs.[13] Flute timbre supports the detection of peace, whereas string timbre aids the identification of anger. Happy and sad assessments, on the other hand, rely primarily on relatively "low-level" structural information such as tempo. Both low-level cues (e.g., slow tempo) and timbre aid the detection of peaceful music, but only timbre cues anger recognition.[13] Communication of peace therefore takes place at multiple structural levels, while anger seems to be conveyed almost exclusively by timbre. Similarities between aggressive vocalizations and angry music (e.g., roughness) may contribute to the salience of timbre in anger assessments.[15]

Stereotype theory of emotion in music

The stereotype theory of emotion in music (STEM) suggests that cultural stereotyping may affect the emotion perceived in music. STEM argues that for some listeners with low expertise, emotion perception in music is based on stereotyped associations the listener holds about the encoding culture of the music (i.e., the culture representative of a particular music genre, such as Brazilian culture encoded in bossa nova).[16] STEM extends the cue-redundancy model: in addition to positing two sources of emotional information, it explains some cultural cues specifically in terms of stereotyping. In particular, STEM makes the more specific prediction that emotion perceived in music depends to some extent on cultural stereotypes of the music genre being heard.

Complexity

Because musical complexity is a psychophysical dimension, the cue-redundancy model predicts that complexity is perceived independently of experience. However, South African and Finnish listeners assign different complexity ratings to identical African folk songs.[17] The cue-redundancy model may therefore be overly simplistic in its distinction between structural feature detection and cultural learning, at least in the case of complexity.

Repetition

When listening to music from one's own cultural tradition, repetition plays a key role in emotion judgments. American listeners who hear classical or jazz excerpts multiple times rate the elicited and conveyed emotion of the pieces higher than do participants who hear the pieces once.[18]

Methodological limitations

Methodological limitations of previous studies preclude a complete understanding of the role of psychophysical cues in emotion recognition. Divergent mode and tone cues elicit "mixed affect", demonstrating the potential for mixed emotional percepts.[19] The use of dichotomous scales (e.g., simple happy/sad ratings) may mask this phenomenon, as such tasks require participants to report a single component of a multidimensional affective experience.
Memory

Enculturation is a powerful influence on music memory. Both long-term and working memory systems are critically involved in the appreciation and comprehension of music. Long-term memory enables the listener to develop musical expectations based on previous experience, while working memory is necessary to relate pitches to one another within a phrase, between phrases, and throughout a piece.[20]

Neuroscience

Neuroscientific evidence suggests that memory for music is, at least in part, special and distinct from other forms of memory.[21] The neural processes of musical memory retrieval share much with those of verbal memory retrieval, as indicated by functional magnetic resonance imaging studies comparing the brain areas activated during each task.[5] Both musical and verbal memory retrieval activate the left inferior frontal cortex, which is thought to be involved in executive function, especially the executive control of verbal retrieval, and the posterior middle temporal cortex, which is thought to be involved in semantic retrieval.[5][22][23] However, musical semantic retrieval also bilaterally activates the superior temporal gyri, which contain the primary auditory cortex.[5]

Effect of culture

Memory for music

Despite the universality of music, enculturation has a pronounced effect on individuals' memory for music. Evidence suggests that people develop their cognitive understanding of music from their cultures.[4] People are best at recognizing and remembering music in the style of their native culture, and their recognition and memory are better for music from familiar but nonnative cultures than for music from unfamiliar cultures.[4] Part of the difficulty in remembering culturally unfamiliar music may arise from the engagement of different neural processes when listening to familiar versus unfamiliar music. For instance, brain areas involved in attention, including the right angular gyrus and middle frontal gyrus, show increased activity during listening to culturally unfamiliar music compared with novel but culturally familiar music.[20]

Development

Enculturation affects music memory in early childhood, before a child's cognitive schemata for music are fully formed, perhaps beginning as early as one year of age.[24][25] Like adults, children are better able to remember novel music from their native culture than from unfamiliar ones, although they are less capable than adults of remembering more complex music.[24] Children's developing music cognition may also be influenced by the language of their native culture.[26] For instance, children in English-speaking cultures develop the ability to identify pitches from familiar songs at 9 or 10 years old, while Japanese children develop the same ability at age 5 or 6.[26] This difference may be due to the Japanese language's use of pitch accent, which encourages finer pitch discrimination at an early age, in contrast to the stress accent on which English relies.[26]

Musical expectations

Enculturation also biases listeners' expectations, such that they expect to hear tones that correspond to culturally familiar modal traditions.[27] For example, Western participants presented with a series of pitches followed by a test tone not present in the original series were more likely to mistakenly report that the test tone had been present if the tone was derived from a Western scale than if it was derived from a culturally unfamiliar scale.[27] Recent research indicates that deviations from musical expectations may prompt out-group derogation.[28]
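The expectation paradigm described above can be sketched as a toy stimulus generator. The scales, frequencies, and trial structure below are illustrative assumptions, not the materials of the cited study; a seven-tone equal division of the octave stands in for a "culturally unfamiliar" scale.

```python
import random

# Toy probe-tone trial generator (hypothetical stimuli; not the cited study's).
BASE_HZ = 261.63  # approximately middle C

# Western major scale: semitone steps in 12-tone equal temperament.
western_scale = [BASE_HZ * 2 ** (s / 12) for s in (0, 2, 4, 5, 7, 9, 11)]

# Stand-in "unfamiliar" scale: seven equal divisions of the octave, whose
# steps fall between familiar Western scale degrees.
unfamiliar_scale = [BASE_HZ * 2 ** (s / 7) for s in range(7)]

def make_trial(scale):
    """Draw a short pitch series plus a probe tone NOT in that series."""
    series = random.sample(scale, 4)
    probe = random.choice([f for f in scale if f not in series])
    return series, probe

# Enculturated Western listeners false-alarm more often to the Western probe
# (it fits learned tonal expectations) than to the unfamiliar-scale probe.
for name, scale in (("western", western_scale), ("unfamiliar", unfamiliar_scale)):
    series, probe = make_trial(scale)
    print(name, [round(f, 1) for f in series], "probe:", round(probe, 1))
```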
Limits of enculturation

Despite the powerful effects of music enculturation, evidence indicates that cognitive comprehension of, and affinity for, music of different cultures is somewhat plastic. One long-term instance of such plasticity is bimusicalism, a musical phenomenon akin to bilingualism. Bimusical individuals frequently listen to music from two cultures and do not demonstrate the biases in recognition memory and perceived tension displayed by individuals whose listening experience is limited to one musical tradition.[8]

Other evidence suggests that some changes in music appreciation and comprehension can occur over a short period of time. For instance, after half an hour of passive exposure to original melodies using familiar Western pitches in an unfamiliar musical grammar or harmonic structure (the Bohlen–Pierce scale), Western participants demonstrated increased recognition memory for, and greater affinity for, melodies in that grammar.[29] This suggests that even very brief exposure to unfamiliar music can rapidly affect music perception and memory.
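The Bohlen–Pierce scale used in that study is unusual in that it repeats at a 3:1 ratio (the "tritave") rather than the 2:1 octave; in its equal-tempered form it divides that interval into 13 equal steps. A minimal sketch of the pitch arithmetic follows (the 220 Hz reference frequency is an arbitrary choice for illustration):

```python
# Equal-tempered Bohlen-Pierce scale: 13 equal divisions of the 3:1 tritave.
F0 = 220.0  # reference frequency in Hz (arbitrary choice for illustration)

def bp_frequency(step, f0=F0):
    """Frequency of a Bohlen-Pierce scale step above the reference f0."""
    return f0 * 3 ** (step / 13)

# Step 13 lands exactly on 3 * F0 = 660 Hz, so the scale repeats at the
# tritave and shares no 2:1 octave structure with Western scales, which is
# part of why its melodic grammar sounds unfamiliar to Western listeners.
print([round(bp_frequency(k), 1) for k in range(14)])
```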