Last month, Peabody hosted Music Mind Meaning, a conference designed to bring together leading scholars and artists to explore the relationship between music and science. The two-day event took place in Peabody's Griswold Hall and was attended by roughly 100 experts from various fields across the sciences and humanities.
The first night of the event set the exciting tone of the conference. David Huron, a researcher in music cognition at Ohio State University, gave a lecture titled "Emotions and Meanings in Music." Huron set out to answer the question with which he began his talk: How does music have meaning in our lives? There isn't a precise correspondence between sounds and meaning. But, as Huron argued, this doesn't mean that sounds can't convey meaning. Lyrics, song titles, music commentary, learned associations: these are just a few of the ways Huron noted that sounds communicate meaning.
One of the most fascinating aspects of Huron's lecture was his discussion of how signals and cues communicate. A signal, true to its name, is a biologically prepared means of communication, something that evolved for the purpose of conveying information. An example of this would be a rattlesnake's rattle.
A cue, on the other hand, is a non-purposeful feature of a thing onto which we read some type of hidden intention. Huron said we can distinguish signals from cues in two ways: signals are multimodal, and they are not subtle, like a smile.
Why do we show our teeth when we smile? In animals, baring one's teeth is a sign of aggression. So how did a smile come to mean just the opposite in humans? To answer that question, Huron argued that the origin of a smile is acoustical: we can hear a smile. The effect of smiling is achieved by flexing your zygomatic muscles. When you flex these muscles, the sound of your voice actually changes. And thus, Huron argued, smiling is a multimodal form of communicating: it's a non-subtle visual and auditory signal. If this is true, then the opposite of smiling isn't frowning, but puckering. Huron said that by puckering we achieve the opposite auditory effect of flexing the zygomaticus.
After Huron's lecture, attendees were treated to a special music performance by Grammy-nominated jazz pianist Vijay Iyer and tenor saxophonist Gary Thomas, the chair of Jazz Studies at Peabody.
The second day of the conference began bright and early with a keynote address from Aniruddh D. Patel of Tufts University. Patel, former fellow at the Neurosciences Institute and author of Music, Language, and the Brain, delivered a lecture on the possible ways that instrumental education might enhance the brain's processing of speech.
After Patel's keynote, nine more lectures on neuroscience and music followed. Lecture topics included the evolution of "musicking," music and memory, interactive music-making, and auditory imagery. Charles Limb, professor of otolaryngology at the Johns Hopkins School of Medicine, gave a presentation titled "Music for Deaf Ears," in which he discussed the ways in which cochlear implants mediate the perception of music.
Following a roundtable discussion and closing remarks, the conference concluded with a performance by Matmos, a Baltimore-based musical duo specializing in experimental electronica.