Abstract

The human ability to perceive and understand music is remarkable. From an unstructured stream of acoustic input it creates a wide range of experiences, from psycho-acoustic effects to emotional and aesthetic responses. One such set of phenomena is the experience of structure: the perception of notes standing in musically meaningful relationships to each other and to abstract entities such as chords, voices, schemata, formal segments, motives, or themes, which are not directly represented in the stream of notes and thus must be inferred. This dissertation argues that the perception of musical structure from notes is an instance of the general principle of Bayesian perception, which states that perception is probabilistic inference to the latent causes that produce the sensory input. It first explores the fundamental relations between notes and latent entities in three case studies on modal melodies, the recognition of voice-leading schemata, and harmonic ornamentation. It then proposes a unified generative model of the note-level structure underlying Western tonal music and potentially other styles. This model is based on the elaboration of simple latent note configurations into the musical surface, maintaining vertical, horizontal, and hierarchical relations in the process. On the music-theoretical side, the model provides a language for formally expressing analytical intuitions, a foundation for precise definitions of traditional concepts, and a clarification of their relation to the musical surface. On the computational side, it demonstrates how complex musical structures can be inferred and how the structural properties of a style can be learned using parsing and probabilistic inference. On the cognitive side, it shows that the perception of tonal structure can be linked to general Bayesian perception through a generative process. This thesis therefore constitutes a bridge between different perspectives and disciplines, and thus contributes to a unified understanding of the human capacity for music.
