Conditional Drums Generation using Compound Word Representations
- URL: http://arxiv.org/abs/2202.04464v1
- Date: Wed, 9 Feb 2022 13:49:27 GMT
- Title: Conditional Drums Generation using Compound Word Representations
- Authors: Dimos Makris, Guo Zixun, Maximos Kaliakatsos-Papakostas, Dorien
Herremans
- Abstract summary: We tackle the task of conditional drums generation using a novel data encoding scheme inspired by Compound Word representation.
We present a sequence-to-sequence architecture where a Bidirectional Long short-term memory (BiLSTM) receives information about the conditioning parameters.
A Transformer-based Decoder with relative global attention produces the generated drum sequences.
- Score: 4.435094091999926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of automatic music composition has seen great progress in recent
years, specifically with the invention of transformer-based architectures. When
using any deep learning model which considers music as a sequence of events
with multiple complex dependencies, the selection of a proper data
representation is crucial. In this paper, we tackle the task of conditional
drums generation using a novel data encoding scheme inspired by the Compound
Word representation, a tokenization process of sequential data. Therefore, we
present a sequence-to-sequence architecture where a Bidirectional Long
short-term memory (BiLSTM) Encoder receives information about the conditioning
parameters (i.e., accompanying tracks and musical attributes), while a
Transformer-based Decoder with relative global attention produces the generated
drum sequences. We conducted experiments to thoroughly compare the
effectiveness of our method to several baselines. Quantitative evaluation shows
that our model is able to generate drum sequences that have similar
statistical distributions and characteristics to the training corpus. These
features include syncopation, compression ratio, and symmetry among others. We
also verified, through a listening test, that generated drum sequences sound
pleasant, natural and coherent while they "groove" with the given
accompaniment.
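
To make the described pipeline more concrete, below is a minimal PyTorch sketch of a compound-word-style conditional drums model: each compound token bundles several attribute families into one embedding, a BiLSTM encoder reads the conditioning tracks and attributes, and a Transformer decoder generates drum tokens. This is not the authors' implementation; the token families, layer sizes, and the use of standard (rather than relative global) attention are all illustrative assumptions.

```python
# Hedged sketch of a conditional drums seq2seq model, inspired by the abstract above.
# Token families, vocabulary sizes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class CompoundWordEmbedding(nn.Module):
    """Embeds one compound token = a tuple of per-family indices, then merges them."""
    def __init__(self, family_sizes, d_model):
        super().__init__()
        self.embeddings = nn.ModuleList(nn.Embedding(v, d_model) for v in family_sizes)
        self.proj = nn.Linear(len(family_sizes) * d_model, d_model)

    def forward(self, tokens):               # tokens: (batch, seq, n_families)
        parts = [emb(tokens[..., i]) for i, emb in enumerate(self.embeddings)]
        return self.proj(torch.cat(parts, dim=-1))   # (batch, seq, d_model)

class ConditionalDrumsModel(nn.Module):
    def __init__(self, enc_families=(64, 32, 16), dec_families=(64, 16, 8),
                 d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.enc_embed = CompoundWordEmbedding(enc_families, d_model)
        self.dec_embed = CompoundWordEmbedding(dec_families, d_model)
        # BiLSTM encoder over the conditioning tracks / musical attributes.
        self.encoder = nn.LSTM(d_model, d_model // 2, num_layers=2,
                               bidirectional=True, batch_first=True)
        # Standard Transformer decoder used here as a stand-in for the paper's
        # decoder with relative global attention.
        layer = nn.TransformerDecoderLayer(d_model, nhead,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        # One softmax head per output token family (e.g. position, drum pitch, velocity).
        self.heads = nn.ModuleList(nn.Linear(d_model, v) for v in dec_families)

    def forward(self, cond_tokens, drum_tokens):
        memory, _ = self.encoder(self.enc_embed(cond_tokens))
        tgt = self.dec_embed(drum_tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        h = self.decoder(tgt, memory, tgt_mask=mask)
        return [head(h) for head in self.heads]   # one logit tensor per family

# Toy usage: batch of 2, 32 conditioning steps, 16 drum steps, 3 families each.
model = ConditionalDrumsModel()
cond = torch.randint(0, 8, (2, 32, 3))
drums = torch.randint(0, 8, (2, 16, 3))
logits = model(cond, drums)
```

Using one projection over concatenated family embeddings on the input side, and one classification head per family on the output side, is what lets a compound-word-style model consume and emit all attributes of a musical event in a single step instead of one token per attribute.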
Related papers
- SynthSOD: Developing an Heterogeneous Dataset for Orchestra Music Source Separation [7.428668206443388]
We introduce a novel multitrack dataset called SynthSOD, developed using a set of simulation techniques to create a realistic training set.
We demonstrate the application of a widely used baseline music separation model trained on our synthesized dataset with respect to the well-known EnsembleSet.
arXiv Detail & Related papers (2024-09-17T08:58:33Z) - Text Conditioned Symbolic Drumbeat Generation using Latent Diffusion Models [0.0]
This study introduces a text-conditioned approach to generating drumbeats with Latent Diffusion Models (LDMs).
By pretraining a text and drumbeat encoder through contrastive learning within a multimodal network, we align the modalities of text and music closely (a minimal sketch of this contrastive alignment appears after this list).
We show that the generated drumbeats are novel and apt to the prompt text, and comparable in quality to those created by human musicians.
arXiv Detail & Related papers (2024-08-05T13:23:05Z) - Multi-view MidiVAE: Fusing Track- and Bar-view Representations for Long
Multi-track Symbolic Music Generation [50.365392018302416]
We propose Multi-view MidiVAE, one of the first VAE-based methods to effectively model and generate long multi-track symbolic music.
We focus on instrumental characteristics and harmony as well as global and local information about the musical composition by employing a hybrid variational encoding-decoding strategy.
arXiv Detail & Related papers (2024-01-15T08:41:01Z) - Symphony Generation with Permutation Invariant Language Model [57.75739773758614]
We present a symbolic symphony music generation solution, SymphonyNet, based on a permutation invariant language model.
A novel transformer decoder architecture is introduced as the backbone for modeling extra-long sequences of symphony tokens.
Our empirical results show that the proposed approach can generate symphonies that are coherent, novel, complex, and harmonious when compared to human compositions.
arXiv Detail & Related papers (2022-05-10T13:08:49Z) - FIGARO: Generating Symbolic Music with Fine-Grained Artistic Control [25.95359681751144]
We propose the self-supervised description-to-sequence task, which allows for fine-grained controllable generation on a global level.
We do so by extracting high-level features about the target sequence and learning the conditional distribution of sequences given the corresponding high-level description in a sequence-to-sequence modelling setup.
By combining learned high level features with domain knowledge, which acts as a strong inductive bias, the model achieves state-of-the-art results in controllable symbolic music generation and generalizes well beyond the training distribution.
arXiv Detail & Related papers (2022-01-26T13:51:19Z) - Generating Lead Sheets with Affect: A Novel Conditional seq2seq
Framework [3.029434408969759]
We present a novel approach for calculating the positivity or negativity of a chord progression within a lead sheet.
We treat the task similarly to a Neural Machine Translation (NMT) problem, as we include high-level conditions in the encoder part of the sequence-to-sequence architecture.
The proposed strategy is able to generate lead sheets in a controllable manner, resulting in distributions of musical attributes similar to those of the training dataset.
arXiv Detail & Related papers (2021-04-27T09:04:21Z) - A framework to compare music generative models using automatic
evaluation metrics extended to rhythm [69.2737664640826]
This paper builds on a framework proposed in previous research that did not consider rhythm; after a series of design decisions, rhythm support is added to evaluate the performance of two RNN memory cells in the creation of monophonic music.
The model handles music transposition, and the framework evaluates the quality of the generated pieces using automatic, geometry-based quantitative metrics that have been extended with rhythm support as well.
arXiv Detail & Related papers (2021-01-19T15:04:46Z) - Sequence Generation using Deep Recurrent Networks and Embeddings: A
study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z) - Vector-Quantized Timbre Representation [53.828476137089325]
This paper targets a more flexible synthesis of an individual timbre by learning an approximate decomposition of its spectral properties with a set of generative features.
We introduce an auto-encoder with a discrete latent space that is disentangled from loudness in order to learn a quantized representation of a given timbre distribution.
We detail results for translating audio between orchestral instruments and singing voice, as well as transfers from vocal imitations to instruments.
arXiv Detail & Related papers (2020-07-13T12:35:45Z) - VaPar Synth -- A Variational Parametric Model for Audio Synthesis [78.3405844354125]
We present VaPar Synth - a Variational Parametric Synthesizer which utilizes a conditional variational autoencoder (CVAE) trained on a suitable parametric representation.
We demonstrate our proposed model's capabilities via the reconstruction and generation of instrumental tones with flexible control over their pitch.
arXiv Detail & Related papers (2020-03-30T16:05:47Z) - Continuous Melody Generation via Disentangled Short-Term Representations
and Structural Conditions [14.786601824794369]
We present a model for composing melodies given a user-specified symbolic scenario combined with a previous music context.
Our model is capable of generating long melodies by treating 8-beat note sequences as basic units, and it shares a consistent rhythm-pattern structure with another specific song.
Results show that the music generated by our model tends to have salient repetition structures, rich motives, and stable rhythm patterns.
arXiv Detail & Related papers (2020-02-05T06:23:44Z)
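
The text-conditioned drumbeat entry above mentions pretraining a text encoder and a drumbeat encoder with contrastive learning so that the two modalities share an embedding space. The sketch below shows the generic CLIP-style recipe such a setup typically follows; the toy encoders, shapes, and symmetric InfoNCE loss are assumptions for illustration, not that paper's implementation.

```python
# Hedged sketch of contrastive text-drumbeat alignment; all modules are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleEncoder(nn.Module):
    """Mean-pooled token embedding; stands in for a real text or drumbeat encoder."""
    def __init__(self, vocab_size, d_embed=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_embed)
        self.proj = nn.Linear(d_embed, d_embed)

    def forward(self, tokens):                    # tokens: (batch, seq)
        return F.normalize(self.proj(self.embed(tokens).mean(dim=1)), dim=-1)

def contrastive_loss(text_emb, drum_emb, temperature=0.07):
    """Symmetric InfoNCE: matching (text, drumbeat) pairs sit on the diagonal."""
    logits = text_emb @ drum_emb.t() / temperature      # (batch, batch)
    targets = torch.arange(logits.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with a batch of 4 paired (caption, drumbeat) examples.
text_enc, drum_enc = SimpleEncoder(vocab_size=1000), SimpleEncoder(vocab_size=512)
text_tokens = torch.randint(0, 1000, (4, 12))
drum_tokens = torch.randint(0, 512, (4, 64))
loss = contrastive_loss(text_enc(text_tokens), drum_enc(drum_tokens))
```

In such a pipeline, the aligned text embedding would then typically serve as the conditioning signal for the latent diffusion model that generates the drumbeat.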
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences.