AccoMontage-3: Full-Band Accompaniment Arrangement via Sequential Style Transfer and Multi-Track Function Prior
- URL: http://arxiv.org/abs/2310.16334v1
- Date: Wed, 25 Oct 2023 03:30:37 GMT
- Title: AccoMontage-3: Full-Band Accompaniment Arrangement via Sequential Style Transfer and Multi-Track Function Prior
- Authors: Jingwei Zhao, Gus Xia, Ye Wang
- Abstract summary: AccoMontage-3 is a symbolic music automation system that learns to generate full-band accompaniment in a self-supervised fashion.
- Score: 9.028718251389495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose AccoMontage-3, a symbolic music automation system capable of
generating multi-track, full-band accompaniment based on the input of a lead
melody with chords (i.e., a lead sheet). The system contains three modular
components, each modelling a vital aspect of full-band composition. The first
component is a piano arranger that generates piano accompaniment for the lead
sheet by transferring texture styles to the chords using latent chord-texture
disentanglement and heuristic retrieval of texture donors. The second component
orchestrates the piano accompaniment score into full-band arrangement according
to the orchestration style encoded by individual track functions. The third
component, which connects the previous two, is a prior model characterizing the
global structure of orchestration style over the whole piece of music. From end
to end, the system learns to generate full-band accompaniment in a
self-supervised fashion, applying style transfer at two levels of polyphonic
composition: texture and orchestration. Experiments show that our system
outperforms the baselines significantly, and the modular design offers
effective controls in a musically meaningful way.
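The modular design described in the abstract maps naturally onto a staged pipeline. Below is a minimal sketch of that flow; the class and method names are hypothetical placeholders, not the authors' released API.

```python
# Hypothetical sketch of the three-stage flow described in the abstract.
# All names are illustrative; they do not reflect the released code.

class AccoMontage3Sketch:
    def __init__(self, piano_arranger, prior_model, orchestrator):
        self.piano_arranger = piano_arranger  # lead sheet -> piano accompaniment
        self.prior_model = prior_model        # piano score -> global track-function plan
        self.orchestrator = orchestrator      # piano score + track functions -> full band

    def arrange(self, melody, chords):
        # Stage 1: transfer a retrieved texture style onto the chords
        # via latent chord-texture disentanglement.
        piano = self.piano_arranger.arrange(melody, chords)
        # Stage 2: the prior model plans orchestration style over the
        # whole piece, one function code per track per segment.
        track_functions = self.prior_model.sample(piano)
        # Stage 3: orchestrate the piano score into a multi-track
        # arrangement conditioned on those track functions.
        return self.orchestrator.orchestrate(piano, track_functions)
```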
Related papers
- MuseBarControl: Enhancing Fine-Grained Control in Symbolic Music Generation through Pre-Training and Counterfactual Loss [51.85076222868963]
We introduce a pre-training task designed to link control signals directly with corresponding musical tokens.
We then implement a novel counterfactual loss that promotes better alignment between the generated music and the control prompts.
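One plausible reading of such a loss is a margin objective: the generated tokens should be more likely under the true control signal than under a perturbed (counterfactual) one. The sketch below assumes a `model` that returns next-token logits given tokens and a control embedding; it is illustrative, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sequence_logprob(model, tokens, control):
    # tokens: (batch, time) token ids; control: (batch, d) control embedding.
    logits = model(tokens[:, :-1], control)        # (batch, time-1, vocab)
    logp = F.log_softmax(logits, dim=-1)
    target = tokens[:, 1:].unsqueeze(-1)
    return logp.gather(-1, target).squeeze(-1).sum(dim=1)  # (batch,)

def counterfactual_loss(model, tokens, control, control_cf, margin=1.0):
    # The true control should explain the music better than a
    # mismatched (counterfactual) control, by at least `margin`.
    lp_true = sequence_logprob(model, tokens, control)
    lp_cf = sequence_logprob(model, tokens, control_cf)
    return F.relu(margin - (lp_true - lp_cf)).mean()
```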
arXiv Detail & Related papers (2024-07-05T08:08:22Z)
- DiffMoog: a Differentiable Modular Synthesizer for Sound Matching [48.33168531500444]
DiffMoog is a differentiable modular synthesizer with a comprehensive set of modules typically found in commercial instruments.
Being differentiable, it allows integration into neural networks, enabling automated sound matching.
We introduce an open-source platform that comprises DiffMoog and an end-to-end sound matching framework.
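To make the idea of a differentiable synthesizer module concrete, here is a toy oscillator whose parameters receive gradients, so it can sit inside a network and be fit by backpropagation. This is a generic sketch, not a DiffMoog module.

```python
import torch

def sine_oscillator(freq_hz, amp, n_samples, sample_rate=44100):
    # Differentiable by construction: gradients flow to freq_hz and amp.
    t = torch.arange(n_samples) / sample_rate
    return amp * torch.sin(2 * torch.pi * freq_hz * t)

# Toy sound matching: recover amplitude/frequency by gradient descent.
# (Real systems typically use spectral losses; waveform MSE is shown
# for brevity and can get stuck in local minima for frequency.)
target = sine_oscillator(torch.tensor(440.0), torch.tensor(0.8), 1024)
freq = torch.tensor(430.0, requires_grad=True)
amp = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.Adam([freq, amp], lr=0.01)
for _ in range(1000):
    opt.zero_grad()
    loss = torch.mean((sine_oscillator(freq, amp, 1024) - target) ** 2)
    loss.backward()
    opt.step()
```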
arXiv Detail & Related papers (2024-01-23T08:59:21Z)
- Graph-based Polyphonic Multitrack Music Generation [9.701208207491879]
This paper introduces a novel graph representation for music and a deep Variational Autoencoder that generates the structure and the content of musical graphs separately.
By separating the structure and content of musical graphs, it is possible to condition generation by specifying which instruments are played at certain times.
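The separation can be pictured with two latent codes, one for structure (which tracks sound in which bars) and one for content (the notes), with the content decoder conditioned on the structure code. A toy dense-layer sketch follows; the paper operates on actual musical graphs, and all dimensions here are assumed.

```python
import torch
import torch.nn as nn

class StructureContentVAE(nn.Module):
    def __init__(self, n_tracks=4, n_bars=16, pitch_dim=128, z_dim=32):
        super().__init__()
        self.enc_structure = nn.Linear(n_tracks * n_bars, 2 * z_dim)
        self.enc_content = nn.Linear(n_tracks * n_bars * pitch_dim, 2 * z_dim)
        self.dec_structure = nn.Linear(z_dim, n_tracks * n_bars)
        self.dec_content = nn.Linear(2 * z_dim, n_tracks * n_bars * pitch_dim)

    def reparam(self, stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, activity, notes):
        # activity: (B, tracks*bars) binary track-activity matrix,
        # notes: (B, tracks*bars*pitch) flattened note content.
        z_s = self.reparam(self.enc_structure(activity))
        z_c = self.reparam(self.enc_content(notes))
        # Fixing z_s (or the decoded activity) at sampling time pins
        # down which instruments play when, as described above.
        return self.dec_structure(z_s), self.dec_content(torch.cat([z_s, z_c], -1))
```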
arXiv Detail & Related papers (2023-07-27T15:18:50Z)
- Unsupervised Melody-to-Lyric Generation [91.29447272400826]
We propose a method for generating high-quality lyrics without training on any aligned melody-lyric data.
We leverage the segmentation and rhythm alignment between melody and lyrics to compile the given melody into decoding constraints.
Our model can generate high-quality lyrics that are more on-topic, singable, intelligible, and coherent than strong baselines.
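A simple way to picture "compiling a melody into decoding constraints" is a per-line syllable budget derived from the notes, enforced during generation. The helpers below are a hypothetical greedy sketch, not the paper's decoder.

```python
def syllable_budgets(melody_phrases):
    # Simplest compilation: one syllable per note in each phrase.
    return [len(notes) for notes in melody_phrases]

def constrained_line(next_words, budget, count_syllables):
    """Greedily append the most probable word that still fits the
    remaining syllable budget; next_words(prefix) is assumed to
    yield candidate words, best first."""
    line, used = [], 0
    while used < budget:
        for word in next_words(line):
            s = count_syllables(word)
            if used + s <= budget:
                line.append(word)
                used += s
                break
        else:
            break  # nothing fits; stop the line early
    return line
```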
arXiv Detail & Related papers (2023-05-30T17:20:25Z)
- Compose & Embellish: Well-Structured Piano Performance Generation via a Two-Stage Approach [36.49582705724548]
We devise a two-stage Transformer-based framework that Composes a lead sheet first, and then Embellishes it with accompaniment and expressive touches.
Our objective and subjective experiments show that Compose & Embellish shrinks the gap in structureness between a current state of the art and real performances by half, and improves other musical aspects such as richness and coherence as well.
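The staged interface can be summarized in a few lines; the model objects and their `generate` methods are assumed for illustration.

```python
def compose_and_embellish(compose_model, embellish_model, prompt):
    lead_sheet = compose_model.generate(prompt)   # stage 1: structure first
    return embellish_model.generate(lead_sheet)   # stage 2: texture and expression
```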
arXiv Detail & Related papers (2022-09-17T01:20:59Z)
- Symphony Generation with Permutation Invariant Language Model [57.75739773758614]
We present a symbolic symphony music generation solution, SymphonyNet, based on a permutation invariant language model.
A novel transformer decoder architecture is introduced as backbone for modeling extra-long sequences of symphony tokens.
Our empirical results show that the proposed approach can generate coherent, novel, complex, and harmonious symphonies when compared with human compositions.
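The core of permutation invariance over simultaneous notes can be shown by pooling note embeddings so that chord-internal order is irrelevant; the real architecture is far richer (multi-attribute note tokens, a long-sequence decoder), and this fragment only illustrates the invariance idea.

```python
import torch
import torch.nn as nn

class ChordSetEmbedding(nn.Module):
    def __init__(self, pitch_vocab=128, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(pitch_vocab, d_model)

    def forward(self, chord_note_ids):
        # chord_note_ids: (batch, notes_in_chord) pitch ids; summing
        # makes the output invariant to note order within the chord.
        return self.embed(chord_note_ids).sum(dim=1)
```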
arXiv Detail & Related papers (2022-05-10T13:08:49Z)
- A-Muze-Net: Music Generation by Composing the Harmony based on the Generated Melody [91.22679787578438]
We present a method for generating MIDI files of piano music.
The method models the right and left hands using two networks, where the left hand is conditioned on the right hand.
The MIDI is represented in a way that is invariant to the musical scale, and the melody representation is used to condition the harmony.
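A minimal sketch of the two-network conditioning, assuming time-aligned token streams for the two hands and illustrative dimensions:

```python
import torch
import torch.nn as nn

class TwoHandSketch(nn.Module):
    def __init__(self, vocab=130, d=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.right = nn.GRU(d, d, batch_first=True)     # melody network
        self.left = nn.GRU(2 * d, d, batch_first=True)  # harmony network
        self.right_out = nn.Linear(d, vocab)
        self.left_out = nn.Linear(d, vocab)

    def forward(self, right_tokens, left_tokens):
        r_h, _ = self.right(self.embed(right_tokens))
        # The left hand sees its own history plus the right-hand context.
        l_in = torch.cat([self.embed(left_tokens), r_h], dim=-1)
        l_h, _ = self.left(l_in)
        return self.right_out(r_h), self.left_out(l_h)
```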
arXiv Detail & Related papers (2021-11-25T09:45:53Z)
- Controllable deep melody generation via hierarchical music structure representation [14.891975420982511]
MusicFrameworks combines a hierarchical music structure representation with a multi-step generative process to create a full-length melody.
To generate melody in each phrase, we generate rhythm and basic melody using two separate transformer-based networks.
To customize or add variety, one can alter chords, basic melody, and rhythm structure in the music frameworks, letting our networks generate the melody accordingly.
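The multi-step process reads as "rhythm first, then basic melody, then realization"; a schematic sketch with assumed interfaces:

```python
def generate_from_frameworks(rhythm_net, melody_net, frameworks):
    piece = []
    for phrase_spec in frameworks:                 # long-term structure
        rhythm = rhythm_net.generate(phrase_spec)  # step 1: rhythm skeleton
        piece.append(melody_net.generate(phrase_spec, rhythm))  # step 2: notes
    return piece
```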
arXiv Detail & Related papers (2021-09-02T01:31:14Z)
- Learning Interpretable Representation for Controllable Polyphonic Music Generation [5.01266258109807]
We design a novel architecture that effectively learns two interpretable latent factors of polyphonic music: chord and texture.
We show that such chord-texture disentanglement provides a controllable generation pathway leading to a wide spectrum of applications.
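This is the mechanism AccoMontage-3's piano arranger builds on. In code, style transfer by latent swap reduces to a few lines, assuming an encoder-decoder with separate chord and texture codes:

```python
def texture_transfer(model, source_piece, donor_piece):
    z_chord = model.encode_chord(source_piece)     # keep the harmony
    z_texture = model.encode_texture(donor_piece)  # borrow the texture
    return model.decode(z_chord, z_texture)
```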
arXiv Detail & Related papers (2020-08-17T07:11:16Z)
- Music Gesture for Visual Sound Separation [121.36275456396075]
"Music Gesture" is a keypoint-based structured representation to explicitly model the body and finger movements of musicians when they perform music.
We first adopt a context-aware graph network to integrate visual semantic context with body dynamics, and then apply an audio-visual fusion model to associate body movements with the corresponding audio signals.
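A crude sketch of the fusion step, where pooled keypoint features gate an audio embedding; mean pooling stands in for the paper's context-aware graph network, and all shapes are assumed.

```python
import torch
import torch.nn as nn

class GestureAudioFusion(nn.Module):
    def __init__(self, joint_dim=64, audio_dim=128):
        super().__init__()
        self.to_gate = nn.Linear(joint_dim, audio_dim)

    def forward(self, joint_feats, audio_emb):
        # joint_feats: (B, n_joints, joint_dim); audio_emb: (B, audio_dim).
        body = joint_feats.mean(dim=1)  # placeholder for graph pooling
        return audio_emb * torch.sigmoid(self.to_gate(body))
```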
arXiv Detail & Related papers (2020-04-20T17:53:46Z)
- Continuous Melody Generation via Disentangled Short-Term Representations and Structural Conditions [14.786601824794369]
We present a model for composing melodies given a user-specified symbolic scenario combined with a previous music context.
The model generates long melodies by treating 8-beat note sequences as basic units, and can share a consistent rhythm-pattern structure with another specified song.
Results show that the music generated by our model tends to have salient repetition structures, rich motives, and stable rhythm patterns.
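Treating 8-beat units as the generation step suggests a simple autoregressive loop over units, each conditioned on the previous unit and a structural condition (e.g. a rhythm pattern borrowed from a template song); the interface below is assumed.

```python
def generate_long_melody(unit_model, conditions, context, n_units):
    units = [context]
    for i in range(n_units):
        units.append(unit_model.generate(prev=units[-1],
                                         condition=conditions[i % len(conditions)]))
    return units[1:]  # drop the seed context
```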
arXiv Detail & Related papers (2020-02-05T06:23:44Z)