Structural characterization of musical harmonies
- URL: http://arxiv.org/abs/1912.12362v1
- Date: Fri, 27 Dec 2019 23:15:49 GMT
- Title: Structural characterization of musical harmonies
- Authors: María Rojo González and Simone Santini
- Abstract summary: We use a hybrid method in which an evidence-gathering numerical method detects modulation and then, based on the detected tonalities, a non-ambiguous grammar can be used for analyzing the structure of each tonal component.
Experiments with music from the XVII and XVIII centuries show that we can detect the precise point of modulation with an error of at most two chords in almost 97% of the cases.
- Score: 4.416484585765029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the structural characteristics of harmony is essential for an
effective use of music as a communication medium. Of the three expressive axes
of music (melody, rhythm, harmony), harmony is the foundation on which the
emotional content is built, and its understanding is important in areas such as
multimedia and affective computing. The common tool for studying this kind of
structure in computing science is the formal grammar but, in the case of music,
grammars run into problems due to the ambiguous nature of some of the concepts
defined in music theory. In this paper, we consider one such construct:
modulation, that is, the change of key in the middle of a musical piece, an
important tool used by many authors to enhance the capacity of music to express
emotions. We develop a hybrid method in which an evidence-gathering numerical
method detects modulation and then, based on the detected tonalities, a
non-ambiguous grammar can be used for analyzing the structure of each tonal
component. Experiments with music from the XVII and XVIII centuries show that
we can detect the precise point of modulation with an error of at most two
chords in almost 97% of the cases. Finally, we show examples of complete
modulation and structural analysis of musical harmonies.
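As a rough illustration of the first stage of this pipeline, the sketch below is a minimal, hypothetical evidence-gathering modulation detector; it is not the authors' implementation. It assumes chords are given as pitch-class sets, scores each sliding window against the 12 major keys by counting the chords that are diatonic to each key (minor keys and the grammar-based structural analysis are omitted), and reports a modulation wherever the best-supported key changes. The window size, scoring rule, and example progression are all illustrative assumptions.

```python
# Hypothetical sketch of an evidence-gathering modulation detector
# (not the paper's method): chords are pitch-class sets, keys gather
# "evidence" from the chords of a sliding window, and a modulation is
# reported where the best-supported key changes.

from collections import Counter

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # scale steps of a major key on tonic 0

def diatonic_sets():
    """Pitch-class sets of the 12 major keys (minor keys omitted for brevity)."""
    return {tonic: {(tonic + step) % 12 for step in MAJOR_SCALE}
            for tonic in range(12)}

KEYS = diatonic_sets()

def key_evidence(chord_window):
    """Count, for each key, how many chords in the window it fully explains."""
    counts = Counter()
    for chord in chord_window:          # chord: set of pitch classes, e.g. {0, 4, 7}
        for tonic, scale in KEYS.items():
            if chord <= scale:          # chord is diatonic to this key
                counts[tonic] += 1
    return counts

def detect_modulations(chords, window=4):
    """Return (window_index, old_tonic, new_tonic) wherever the winning key changes."""
    best_keys = []
    for i in range(len(chords) - window + 1):
        evidence = key_evidence(chords[i:i + window])
        best_keys.append(evidence.most_common(1)[0][0] if evidence else None)
    changes = []
    for i in range(1, len(best_keys)):
        if best_keys[i] != best_keys[i - 1]:
            changes.append((i, best_keys[i - 1], best_keys[i]))
    return changes

# Example: a C major region followed by a G major region (modulation to the dominant).
c_major = [{0, 4, 7}, {5, 9, 0}, {7, 11, 2}, {0, 4, 7}]
g_major = [{7, 11, 2}, {0, 4, 7}, {2, 6, 9}, {7, 11, 2}]
print(detect_modulations(c_major + g_major))   # e.g. [(3, 0, 7)]: C major -> G major
```

On this toy progression the sketch reports a change from tonic 0 (C major) to tonic 7 (G major); the paper's method goes further, locating the modulation point to within two chords and handing each tonal segment to a non-ambiguous grammar for structural analysis.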
Related papers
- A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z)
- Towards Explainable and Interpretable Musical Difficulty Estimation: A Parameter-efficient Approach [49.2787113554916]
Estimating music piece difficulty is important for organizing educational music collections.
Our work employs explainable descriptors for difficulty estimation in symbolic music representations.
Our approach, evaluated on piano repertoire categorized into 9 classes, achieved 41.4% accuracy independently, with a mean squared error (MSE) of 1.7.
arXiv Detail & Related papers (2024-08-01T11:23:42Z)
- Emotion-Driven Melody Harmonization via Melodic Variation and Functional Representation [16.790582113573453]
Emotion-driven melody harmonization aims to generate diverse harmonies for a single melody to convey desired emotions.
Previous research found it hard to alter the perceived emotional valence of lead sheets only by harmonizing the same melody with different chords.
In this paper, we propose a novel functional representation for symbolic music.
arXiv Detail & Related papers (2024-07-29T17:05:12Z)
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- In-depth analysis of music structure as a text network [7.735597173716555]
We focus on the fundamental elements of music and construct an evolutionary network from the perspective of music as a natural language.
We aim to comprehend the structural differences in music across different periods, enabling a more scientific exploration of music.
arXiv Detail & Related papers (2023-03-21T08:39:56Z)
- MeloForm: Generating Melody with Musical Form based on Expert Systems and Neural Networks [146.59245563763065]
MeloForm is a system that generates melody with musical form using expert systems and neural networks.
It can support various kinds of forms, such as verse and chorus form, rondo form, variational form, sonata form, etc.
arXiv Detail & Related papers (2022-08-30T15:44:15Z)
- Structure-Enhanced Pop Music Generation via Harmony-Aware Learning [20.06867705303102]
We propose to leverage harmony-aware learning for structure-enhanced pop music generation.
Results of subjective and objective evaluations demonstrate that Harmony-Aware Hierarchical Music Transformer (HAT) significantly improves the quality of generated music.
arXiv Detail & Related papers (2021-09-14T05:04:13Z)
- MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training [97.91071692716406]
Symbolic music understanding refers to understanding music from symbolic data.
MusicBERT is a large-scale pre-trained model for music understanding.
arXiv Detail & Related papers (2021-06-10T10:13:05Z)
- Music Harmony Generation, through Deep Learning and Using a Multi-Objective Evolutionary Algorithm [0.0]
This paper introduces a genetic multi-objective evolutionary optimization algorithm for the generation of polyphonic music.
One of its objectives encodes the rules and conventions of music; together with two further objectives based on the ratings of music experts and of ordinary listeners, it drives the evolutionary cycle toward the most suitable output.
The results show that the proposed method can generate challenging yet pleasant pieces of the desired style and length, with harmonies that follow the grammar of music while appealing to the listener.
arXiv Detail & Related papers (2021-02-16T05:05:54Z)
- Music Gesture for Visual Sound Separation [121.36275456396075]
"Music Gesture" is a keypoint-based structured representation to explicitly model the body and finger movements of musicians when they perform music.
We first adopt a context-aware graph network to integrate visual semantic context with body dynamics, and then apply an audio-visual fusion model to associate body movements with the corresponding audio signals.
arXiv Detail & Related papers (2020-04-20T17:53:46Z)
- Exploring Inherent Properties of the Monophonic Melody of Songs [10.055143995729415]
We propose a set of interpretable features on monophonic melody for computational purposes.
These features are defined not only in mathematical form, but also with some consideration of composers' intuition.
These features are considered universally by people across many genres of songs, even in atonal composition practices.
arXiv Detail & Related papers (2020-03-20T14:13:16Z)