A-Muze-Net: Music Generation by Composing the Harmony based on the
Generated Melody
- URL: http://arxiv.org/abs/2111.12986v1
- Date: Thu, 25 Nov 2021 09:45:53 GMT
- Title: A-Muze-Net: Music Generation by Composing the Harmony based on the
Generated Melody
- Authors: Or Goren, Eliya Nachmani, Lior Wolf
- Abstract summary: We present a method for the generation of Midi files of piano music.
The method models the right and left hands using two networks, where the left hand is conditioned on the right hand.
The Midi is represented in a way that is invariant to the musical scale, and the melody is represented, for the purpose of conditioning the harmony, by the content of each bar, viewed as a chord.
- Score: 91.22679787578438
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method for the generation of Midi files of piano music. The
method models the right and left hands using two networks, where the left hand
is conditioned on the right hand. This way, the melody is generated before the
harmony. The Midi is represented in a way that is invariant to the musical
scale, and the melody is represented, for the purpose of conditioning the
harmony, by the content of each bar, viewed as a chord. Finally, notes are
added randomly, based on this chord representation, in order to enrich the
generated audio. Our experiments show a significant improvement over the state
of the art for training on such datasets, and demonstrate the contribution of
each of the novel components.
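The abstract describes two design choices that are easy to make concrete: a scale-invariant pitch encoding and a per-bar chord summary of the melody that conditions the harmony (left-hand) network. Below is a minimal illustrative sketch of that idea in Python; the note format, key handling, and three-pitch-class chord heuristic are assumptions for illustration, not the paper's exact representation.

```python
# Illustrative only: scale-invariant encoding plus a per-bar chord summary of the
# melody, of the kind used to condition a harmony network. The encoding details
# here are hypothetical simplifications, not the paper's exact scheme.
from collections import Counter

def to_scale_invariant(notes, tonic_pitch_class):
    """Shift each pitch so the key's tonic maps to 0, giving a scale-invariant pitch class."""
    return [((pitch - tonic_pitch_class) % 12, start, dur) for pitch, start, dur in notes]

def bar_chord_summary(notes, beats_per_bar=4):
    """Summarize each bar of the melody by its pitch-class content, viewed as a chord.

    `notes` is a list of (pitch_class, start_beat, duration_in_beats) tuples.
    Returns one 12-dimensional multi-hot vector per bar.
    """
    num_bars = int(max(start for _, start, _ in notes) // beats_per_bar) + 1
    bars = [Counter() for _ in range(num_bars)]
    for pitch_class, start, dur in notes:
        bars[int(start // beats_per_bar)][pitch_class] += dur  # weight pitch classes by duration
    summaries = []
    for counter in bars:
        chord = {pc for pc, _ in counter.most_common(3)}  # keep the three most prominent pitch classes
        summaries.append([1.0 if pc in chord else 0.0 for pc in range(12)])
    return summaries

# Example: a short melody in C major as (MIDI pitch, start beat, duration in beats)
melody = [(60, 0, 1), (64, 1, 1), (67, 2, 2), (65, 4, 2), (62, 6, 2)]
invariant = to_scale_invariant(melody, tonic_pitch_class=0)  # key of C, so the tonic pitch class is 0
print(bar_chord_summary(invariant))  # one chord-like vector per bar, conditioning the harmony network
```

In the setup the abstract describes, the left-hand network would consume these per-bar summaries of the right hand, and additional notes would then be sampled from the same chord representation to enrich the generated audio.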
Related papers
- Melody Is All You Need For Music Generation [10.366088659024685]
We present the Melody Guided Music Generation (MG2) model, a novel approach that uses melody to guide text-to-music generation.
The proposed MG2 model surpasses current open-source text-to-music generation models, utilizing fewer than 1/3 of the parameters and less than 1/200 of the training data.
arXiv Detail & Related papers (2024-09-30T11:13:35Z)
- Emotion-Driven Melody Harmonization via Melodic Variation and Functional Representation [16.790582113573453]
Emotion-driven melody harmonization aims to generate diverse harmonies for a single melody to convey desired emotions.
Previous research found it hard to alter the perceived emotional valence of lead sheets only by harmonizing the same melody with different chords.
In this paper, we propose a novel functional representation for symbolic music.
arXiv Detail & Related papers (2024-07-29T17:05:12Z)
- InstructME: An Instruction Guided Music Edit And Remix Framework with Latent Diffusion Models [42.2977676825086]
In this paper, we develop InstructME, an Instruction guided Music Editing and remixing framework based on latent diffusion models.
Our framework fortifies the U-Net with multi-scale aggregation in order to maintain consistency before and after editing.
Our proposed method significantly surpasses preceding systems in music quality, text relevance and harmony.
arXiv Detail & Related papers (2023-08-28T07:11:42Z)
- Melody transcription via generative pre-training [86.08508957229348]
A key challenge in melody transcription is building methods that can handle broad audio containing any number of instrument ensembles and musical styles.
To confront this challenge, we leverage representations from Jukebox (Dhariwal et al. 2020), a generative model of broad music audio.
We derive a new dataset containing $50$ hours of melody transcriptions from crowdsourced annotations of broad music.
arXiv Detail & Related papers (2022-12-04T18:09:23Z)
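The transcription recipe above, leveraging representations from a large pretrained generative audio model, boils down to training a lightweight head on precomputed features. The PyTorch sketch below shows that general pattern; the feature dimensionality, frame labels, and two-layer head are illustrative assumptions, not the paper's actual configuration.

```python
# Illustrative only: a small transcription head trained on frozen, precomputed
# audio representations (e.g., features extracted from a pretrained generative
# model). Dimensions and the two-layer head are assumptions, not the paper's setup.
import torch
import torch.nn as nn

FEATURE_DIM = 4800      # assumed dimensionality of the precomputed features
NUM_PITCHES = 129       # 128 MIDI pitches + 1 "no melody note" class

head = nn.Sequential(
    nn.Linear(FEATURE_DIM, 512),
    nn.ReLU(),
    nn.Linear(512, NUM_PITCHES),
)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def training_step(features, pitch_targets):
    """features: (batch, frames, FEATURE_DIM) frozen representations,
    pitch_targets: (batch, frames) integer melody labels per frame."""
    logits = head(features)                              # (batch, frames, NUM_PITCHES)
    loss = loss_fn(logits.flatten(0, 1), pitch_targets.flatten())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy usage with random tensors standing in for real features and labels
feats = torch.randn(2, 100, FEATURE_DIM)
labels = torch.randint(0, NUM_PITCHES, (2, 100))
print(training_step(feats, labels))
```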
- MeloForm: Generating Melody with Musical Form based on Expert Systems and Neural Networks [146.59245563763065]
MeloForm is a system that generates melody with musical form using expert systems and neural networks.
It can support various kinds of forms, such as verse and chorus form, rondo form, variational form, sonata form, etc.
arXiv Detail & Related papers (2022-08-30T15:44:15Z)
- Re-creation of Creations: A New Paradigm for Lyric-to-Melody Generation [158.54649047794794]
Re-creation of Creations (ROC) is a new paradigm for lyric-to-melody generation.
ROC achieves good lyric-melody feature alignment in lyric-to-melody generation.
arXiv Detail & Related papers (2022-08-11T08:44:47Z)
- Chord-Conditioned Melody Choralization with Controllable Harmonicity and Polyphonicity [75.02344976811062]
Melody choralization, i.e. generating a four-part chorale based on a user-given melody, has long been closely associated with J.S. Bach chorales.
Previous neural network-based systems rarely focus on chorale generation conditioned on a chord progression.
We propose DeepChoir, a melody choralization system, which can generate a four-part chorale for a given melody conditioned on a chord progression.
arXiv Detail & Related papers (2022-02-17T02:59:36Z)
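As a rough illustration of the chord-conditioned choralization setting DeepChoir targets, the sketch below feeds per-beat melody and chord tokens into a recurrent model with one output head per accompanying voice. The architecture, vocabulary sizes, and token scheme are placeholders, not DeepChoir's actual design.

```python
# Schematic only (not DeepChoir's architecture): per-beat melody and chord tokens
# condition a recurrent model that predicts the alto, tenor, and bass parts.
import torch
import torch.nn as nn

class ChordConditionedChoralizer(nn.Module):
    def __init__(self, num_pitches=130, num_chords=96, hidden=256):
        super().__init__()
        self.melody_emb = nn.Embedding(num_pitches, 64)
        self.chord_emb = nn.Embedding(num_chords, 64)
        self.rnn = nn.GRU(input_size=128, hidden_size=hidden, batch_first=True)
        # One output head per accompanying voice
        self.alto = nn.Linear(hidden, num_pitches)
        self.tenor = nn.Linear(hidden, num_pitches)
        self.bass = nn.Linear(hidden, num_pitches)

    def forward(self, melody_tokens, chord_tokens):
        x = torch.cat([self.melody_emb(melody_tokens), self.chord_emb(chord_tokens)], dim=-1)
        h, _ = self.rnn(x)
        return self.alto(h), self.tenor(h), self.bass(h)

# Dummy usage: a batch of one piece with 16 beat steps
model = ChordConditionedChoralizer()
melody = torch.randint(0, 130, (1, 16))
chords = torch.randint(0, 96, (1, 16))
alto, tenor, bass = model(melody, chords)   # each is (1, 16, 130) logits
```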
- TeleMelody: Lyric-to-Melody Generation with a Template-Based Two-Stage Method [92.36505210982648]
TeleMelody is a two-stage lyric-to-melody generation system that uses a music template.
It generates melodies with higher quality, better controllability, and less requirement on paired lyric-melody data.
arXiv Detail & Related papers (2021-09-20T15:19:33Z)
- Controllable deep melody generation via hierarchical music structure representation [14.891975420982511]
MusicFrameworks combines a hierarchical music structure representation with a multi-step generative process to create a full-length melody.
To generate melody in each phrase, we generate rhythm and basic melody using two separate transformer-based networks.
To customize or add variety, one can alter chords, basic melody, and rhythm structure in the music frameworks, letting our networks generate the melody accordingly.
arXiv Detail & Related papers (2021-09-02T01:31:14Z)
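The MusicFrameworks summary above describes a multi-step process in which rhythm is generated first and the basic melody is then generated on top of it. The sketch below shows that decomposition with simple stand-in functions in place of the transformer-based networks; the duration choices and chord-tone bias are purely illustrative assumptions.

```python
# High-level sketch of a rhythm-first, multi-step melody pipeline. The two
# "networks" are stand-in functions; the real system uses transformer-based models.
import random

def generate_rhythm(num_beats, seed=None):
    """Stand-in for the rhythm network: note durations summing to num_beats."""
    rng = random.Random(seed)
    durations, remaining = [], num_beats
    while remaining > 0:
        d = min(rng.choice([0.5, 1, 2]), remaining)
        durations.append(d)
        remaining -= d
    return durations

def generate_basic_melody(rhythm, chord_tones=(0, 4, 7), seed=None):
    """Stand-in for the melody network: one pitch class per rhythmic event,
    drawn from chord tones so the result stays tonal."""
    rng = random.Random(seed)
    return [(rng.choice(chord_tones), dur) for dur in rhythm]

def generate_phrase(num_beats=8, chord_tones=(0, 4, 7), seed=0):
    rhythm = generate_rhythm(num_beats, seed)                   # step 1: rhythm
    return generate_basic_melody(rhythm, chord_tones, seed)     # step 2: pitches on that rhythm

print(generate_phrase())  # [(pitch_class, duration), ...] for one phrase
```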
- Differential Music: Automated Music Generation Using LSTM Networks with Representation Based on Melodic and Harmonic Intervals [0.0]
This paper presents a generative AI model for automated music composition with LSTM networks.
It takes a novel approach to encoding musical information, based on movement in music rather than absolute pitch.
Experimental results show promise, as the generated pieces sound musical and tonal.
arXiv Detail & Related papers (2021-08-23T23:51:08Z)
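The Differential Music entry above encodes movement rather than absolute pitch. A minimal sketch of such an interval-based melodic encoding follows; the function names and the reconstruction helper are illustrative, not the paper's implementation.

```python
# Minimal sketch of an interval-based (movement-based) melody encoding: the melody
# is stored as signed pitch steps rather than absolute MIDI pitches, so transposed
# versions of the same motif share one encoding.
def to_melodic_intervals(pitches):
    """Encode a melody as the signed difference between consecutive pitches."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def from_melodic_intervals(intervals, start_pitch):
    """Reconstruct absolute pitches from intervals and a starting pitch."""
    pitches = [start_pitch]
    for step in intervals:
        pitches.append(pitches[-1] + step)
    return pitches

c_major_motif = [60, 62, 64, 65, 67]   # C D E F G
d_major_motif = [62, 64, 66, 67, 69]   # the same motif transposed up a whole tone
assert to_melodic_intervals(c_major_motif) == to_melodic_intervals(d_major_motif) == [2, 2, 1, 2]
assert from_melodic_intervals([2, 2, 1, 2], 60) == c_major_motif
```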