BacHMMachine: An Interpretable and Scalable Model for Algorithmic
Harmonization for Four-part Baroque Chorales
- URL: http://arxiv.org/abs/2109.07623v1
- Date: Wed, 15 Sep 2021 23:39:45 GMT
- Title: BacHMMachine: An Interpretable and Scalable Model for Algorithmic
Harmonization for Four-part Baroque Chorales
- Authors: Yunyao Zhu, Stephen Hahn, Simon Mak, Yue Jiang, Cynthia Rudin
- Abstract summary: BacHMMachine employs a "theory-driven" framework guided by music composition principles.
It provides a probabilistic framework for learning key modulations and chordal progressions from a given melodic line.
It results in vast decreases in computational burden and greater interpretability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic harmonization - the automated harmonization of a musical piece
given its melodic line - is a challenging problem that has garnered much
interest from both music theorists and computer scientists. One genre of
particular interest is the four-part Baroque chorales of J.S. Bach. Methods for
algorithmic chorale harmonization typically adopt a black-box, "data-driven"
approach: they do not explicitly integrate principles from music theory but
rely on a complex learning model trained with a large amount of chorale data.
We propose instead a new harmonization model, called BacHMMachine, which
employs a "theory-driven" framework guided by music composition principles,
along with a "data-driven" model for learning compositional features within
this framework. As its name suggests, BacHMMachine uses a novel Hidden Markov
Model based on key and chord transitions, providing a probabilistic framework
for learning key modulations and chordal progressions from a given melodic
line. This allows for the generation of creative, yet musically coherent
chorale harmonizations; integrating compositional principles allows for a much
simpler model that results in vast decreases in computational burden and
greater interpretability compared to state-of-the-art algorithmic harmonization
methods, at no penalty to quality of harmonization or musicality. We
demonstrate this improvement via comprehensive experiments and Turing tests
comparing BacHMMachine to existing methods.
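The abstract describes a Hidden Markov Model whose hidden states capture key and chord transitions and whose observations are melody notes. As a minimal illustration of that idea (not the paper's actual model), the sketch below runs Viterbi decoding over a toy chord HMM to harmonize a short melody; the chord states, transition and emission tables, and melody are all invented for illustration, not learned parameters from chorale data.

```python
# Toy HMM harmonizer in the spirit of BacHMMachine.
# All probabilities below are hypothetical, not the paper's parameters.
import math

# Hidden states: Roman-numeral chords in C major (illustrative subset).
chords = ["I", "IV", "V"]

# Chord-to-chord transition probabilities (hypothetical values).
trans = {
    "I":  {"I": 0.4, "IV": 0.3, "V": 0.3},
    "IV": {"I": 0.3, "IV": 0.2, "V": 0.5},
    "V":  {"I": 0.6, "IV": 0.1, "V": 0.3},
}

# Emission probabilities: chance a chord "emits" a melody pitch class
# (hypothetical; a real model would learn these from chorale data).
emit = {
    "I":  {"C": 0.5, "E": 0.3, "G": 0.2},
    "IV": {"F": 0.5, "A": 0.3, "C": 0.2},
    "V":  {"G": 0.5, "B": 0.3, "D": 0.2},
}

def viterbi(melody):
    """Most likely chord sequence for a melody under the toy HMM."""
    # Initialize with a uniform prior over chords.
    V = [{c: math.log(1 / len(chords))
             + math.log(emit[c].get(melody[0], 1e-6))
          for c in chords}]
    back = []
    for note in melody[1:]:
        col, ptr = {}, {}
        for c in chords:
            prev = max(chords,
                       key=lambda p: V[-1][p] + math.log(trans[p][c]))
            col[c] = (V[-1][prev] + math.log(trans[prev][c])
                      + math.log(emit[c].get(note, 1e-6)))
            ptr[c] = prev
        V.append(col)
        back.append(ptr)
    # Trace back the highest-probability chord path.
    last = max(chords, key=lambda c: V[-1][c])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["C", "F", "G", "C"]))  # -> ['I', 'IV', 'V', 'I']
```

Under these toy parameters the decoder recovers the classic I-IV-V-I cadence; the paper's model additionally tracks key modulations in the hidden state, which a sketch this small omits.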
Related papers
- Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines [74.42485647685272]
We focus on Generative Masked Language Models (GMLMs)
We train a model to fit conditional probabilities of the data distribution via masking, which are subsequently used as inputs to a Markov Chain to draw samples from the model.
We adapt the T5 model for iteratively-refined parallel decoding, achieving 2-3x speedup in machine translation with minimal sacrifice in quality.
arXiv Detail & Related papers (2024-07-22T18:00:00Z) - A Survey of Music Generation in the Context of Interaction [3.6522809408725223]
Machine learning has been successfully used to compose and generate music, both melodies and polyphonic pieces.
Most of these models are not suitable for human-machine co-creation through live interaction.
arXiv Detail & Related papers (2024-02-23T12:41:44Z) - Simple and Controllable Music Generation [94.61958781346176]
MusicGen is a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens.
Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patterns.
arXiv Detail & Related papers (2023-06-08T15:31:05Z) - Emotion-Conditioned Melody Harmonization with Hierarchical Variational
Autoencoder [11.635877697635449]
We propose a novel LSTM-based Hierarchical Variational Auto-Encoder (LHVAE) to investigate the influence of emotional conditions on melody harmonization.
LHVAE incorporates latent variables and emotional conditions at different levels to model the global and local music properties.
Objective experimental results show that our proposed model outperforms other LSTM-based models.
arXiv Detail & Related papers (2023-06-06T14:28:57Z) - A framework to compare music generative models using automatic
evaluation metrics extended to rhythm [69.2737664640826]
This paper builds on a framework from previous research that did not consider rhythm, makes a series of design decisions, and then adds rhythm support in order to evaluate the performance of two RNN memory cells in the creation of monophonic music.
The model handles music transposition, and the framework evaluates the quality of the generated pieces using automatic geometry-based quantitative metrics, likewise extended with rhythm support.
arXiv Detail & Related papers (2021-01-19T15:04:46Z) - Bach or Mock? A Grading Function for Chorales in the Style of J.S. Bach [74.09517278785519]
We introduce a grading function that evaluates four-part chorales in the style of J.S. Bach along important musical features.
We show that the function is both interpretable and outperforms human experts at discriminating Bach chorales from model-generated ones.
arXiv Detail & Related papers (2020-06-23T21:02:55Z) - Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
arXiv Detail & Related papers (2020-02-20T10:50:58Z) - RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement
Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z) - Continuous Melody Generation via Disentangled Short-Term Representations
and Structural Conditions [14.786601824794369]
We present a model for composing melodies given a user specified symbolic scenario combined with a previous music context.
Our model is capable of generating long melodies by regarding 8-beat note sequences as basic units, and shares consistent rhythm pattern structure with another specific song.
Results show that the music generated by our model tends to have salient repetition structures, rich motives, and stable rhythm patterns.
arXiv Detail & Related papers (2020-02-05T06:23:44Z) - Modeling Musical Structure with Artificial Neural Networks [0.0]
I explore the application of artificial neural networks to different aspects of musical structure modeling.
I show how a connectionist model, the Gated Autoencoder (GAE), can be employed to learn transformations between musical fragments.
I propose a special predictive training of the GAE, which yields a representation of polyphonic music as a sequence of intervals.
arXiv Detail & Related papers (2020-01-06T18:35:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.