Artificial Neural Networks Jamming on the Beat
- URL: http://arxiv.org/abs/2007.06284v3
- Date: Thu, 20 May 2021 07:00:23 GMT
- Title: Artificial Neural Networks Jamming on the Beat
- Authors: Alexey Tikhonov, Ivan P. Yamshchikov
- Abstract summary: The paper presents a large dataset of drum patterns alongside corresponding melodies.
By exploring a latent space of drum patterns, one could generate new drum patterns in a given music style.
A simple artificial neural network could be trained to generate melodies corresponding to these drum patterns used as inputs.
- Score: 20.737171876839238
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the issue of long-scale correlations that are
characteristic of symbolic music and are a challenge for modern generative
algorithms. It suggests a very simple workaround for this challenge, namely,
generation of a drum pattern that could be further used as a foundation for
melody generation. The paper presents a large dataset of drum patterns
alongside corresponding melodies. It explores two possible methods for drum
pattern generation. By exploring a latent space of drum patterns, one could
generate new drum patterns in a given music style. Finally, the paper
demonstrates that a simple artificial neural network could be trained to
generate melodies corresponding to the drum patterns used as inputs. The
resulting system could be used for end-to-end generation of symbolic music
with a song-like structure and higher long-scale correlations between the notes.
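
As a rough illustration of the pipeline described above, the sketch below maps a multi-hot drum-pattern sequence to one melody token per time step with a small recurrent network. The piano-roll-style encoding, vocabulary sizes, and layer widths are assumptions made for the example, not the authors' configuration.

```python
# Illustrative sketch only: a simple network that reads a drum pattern step by
# step and predicts a melody token (pitch, rest, or hold) for each step.
# Encodings and sizes are assumed for the example, not taken from the paper.
import torch
import torch.nn as nn

N_DRUMS = 14       # assumed number of percussion instruments per time step
N_TOKENS = 130     # assumed melody vocabulary: 128 MIDI pitches + rest + hold

class DrumToMelody(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(input_size=N_DRUMS, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_TOKENS)

    def forward(self, drums):            # drums: (batch, steps, N_DRUMS) multi-hot
        h, _ = self.rnn(drums)           # (batch, steps, hidden)
        return self.head(h)              # melody-token logits per step

model = DrumToMelody()
pattern = torch.randint(0, 2, (1, 64, N_DRUMS)).float()   # a random 64-step drum pattern
melody = model(pattern).argmax(dim=-1)                     # greedy decoding, one token per step
print(melody.shape)                                        # torch.Size([1, 64])
```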
Related papers
- Graph-based Polyphonic Multitrack Music Generation [9.701208207491879]
This paper introduces a novel graph representation for music and a deep Variational Autoencoder that generates the structure and the content of musical graphs separately.
By separating the structure and content of musical graphs, it is possible to condition generation by specifying which instruments are played at certain times.
arXiv Detail & Related papers (2023-07-27T15:18:50Z)
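
To make the structure/content separation above concrete, here is a minimal sketch in which one decoder head emits a graph's structure (adjacency logits) and another its content (node features). The flat encoder, graph size, and feature width are assumptions for illustration and do not reproduce the paper's model.

```python
# Minimal, hypothetical sketch of a VAE that decodes a musical graph's structure
# (adjacency) and content (node features) from separate heads; sizes are assumed.
import torch
import torch.nn as nn

N_NODES, NODE_FEATS, LATENT = 16, 8, 32   # assumed graph size and feature width

class MusicGraphVAE(nn.Module):
    def __init__(self):
        super().__init__()
        in_dim = N_NODES * NODE_FEATS
        self.enc = nn.Linear(in_dim, 2 * LATENT)                    # -> mean and log-variance
        self.dec_structure = nn.Linear(LATENT, N_NODES * N_NODES)   # adjacency logits
        self.dec_content = nn.Linear(LATENT, in_dim)                # node-feature logits

    def forward(self, x):                 # x: (batch, N_NODES, NODE_FEATS)
        mu, logvar = self.enc(x.flatten(1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()        # reparameterisation trick
        adj = self.dec_structure(z).view(-1, N_NODES, N_NODES)      # structure: which nodes connect
        feats = self.dec_content(z).view(-1, N_NODES, NODE_FEATS)   # content: what each node is
        return adj, feats, mu, logvar

adj, feats, mu, logvar = MusicGraphVAE()(torch.randn(2, N_NODES, NODE_FEATS))
print(adj.shape, feats.shape)             # torch.Size([2, 16, 16]) torch.Size([2, 16, 8])
```

Conditioning generation on which instruments play at which times can then be framed as fixing part of the structure while sampling the rest.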
- Unsupervised Melody-to-Lyric Generation [91.29447272400826]
We propose a method for generating high-quality lyrics without training on any aligned melody-lyric data.
We leverage the segmentation and rhythm alignment between melody and lyrics to compile the given melody into decoding constraints.
Our model can generate high-quality lyrics that are more on-topic, singable, intelligible, and coherent than strong baselines.
arXiv Detail & Related papers (2023-05-30T17:20:25Z)
- Unsupervised Melody-Guided Lyrics Generation [84.22469652275714]
We propose to generate pleasantly listenable lyrics without training on melody-lyric aligned data.
We leverage the crucial alignments between melody and lyrics and compile the given melody into constraints to guide the generation process.
arXiv Detail & Related papers (2023-05-12T20:57:20Z)
- Museformer: Transformer with Fine- and Coarse-Grained Attention for Music Generation [138.74751744348274]
We propose Museformer, a Transformer with a novel fine- and coarse-grained attention for music generation.
Specifically, with the fine-grained attention, a token of a specific bar directly attends to all the tokens of the bars that are most relevant to music structures.
With the coarse-grained attention, a token only attends to the summarization of the other bars rather than each token of them so as to reduce the computational cost.
arXiv Detail & Related papers (2022-10-19T07:31:56Z)
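
The fine- and coarse-grained split above can be pictured as an attention mask. The toy sketch below marks which token pairs may attend to each other, assuming one summary token per bar and a hand-picked set of "structure-related" bars; both are simplifications of the actual model.

```python
# Toy illustration of a combined fine/coarse attention mask, not Museformer's
# actual implementation; bar bookkeeping and summary tokens are assumptions.
import torch

def fine_coarse_mask(bar_of_token, related_bars, summary_token_of_bar):
    """bar_of_token[i]: bar index of token i; related_bars[b]: bars whose tokens
    bar b may attend to in full; summary_token_of_bar[b]: index of bar b's
    summary token. Returns a boolean (seq, seq) mask, True = attention allowed."""
    n = len(bar_of_token)
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i in range(n):
        for j in range(n):
            bi, bj = bar_of_token[i], bar_of_token[j]
            if bi == bj or bj in related_bars[bi]:
                mask[i, j] = True          # fine: every token of a structure-related bar
            elif j == summary_token_of_bar[bj]:
                mask[i, j] = True          # coarse: only the other bar's summary token
    return mask

# Two bars of three tokens each; each bar treats only itself as structure-related.
mask = fine_coarse_mask(
    bar_of_token=[0, 0, 0, 1, 1, 1],
    related_bars={0: {0}, 1: {1}},
    summary_token_of_bar={0: 0, 1: 3},
)
print(mask.int())
```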
- Setting the rhythm scene: deep learning-based drum loop generation from arbitrary language cues [0.0]
We present a novel method that generates two compasses (bars) of a 4-piece drum pattern that embodies the "mood" of a language cue.
We envision this tool as a composition aid for electronic music and audiovisual soundtrack production, or as an improvisation tool for live performance.
In order to produce the training samples for this model, besides manual annotation of the "scene" or "mood" terms, we have designed a novel method to extract the consensus drum track of any song.
arXiv Detail & Related papers (2022-09-20T21:53:35Z)
- Generating Coherent Drum Accompaniment With Fills And Improvisations [8.334918207379172]
We tackle the task of drum pattern generation conditioned on the accompanying music played by four melodic instruments.
We propose a novelty function to capture the extent of improvisation in a bar relative to its neighbors.
We train a model to predict improvisation locations from the melodic accompaniment tracks.
arXiv Detail & Related papers (2022-09-01T08:31:26Z)
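
A hypothetical way to read the novelty function above is as a per-bar score of how much a bar's drum onsets differ from those of its neighbours; the paper's actual definition may differ, and the array layout below is assumed for the example.

```python
# Hypothetical per-bar "novelty" score: mean Hamming distance between a bar's
# onsets and its neighbours' (higher = more likely a fill or improvisation).
# This is an illustrative stand-in, not the paper's exact formulation.
import numpy as np

def bar_novelty(bars):
    """bars: (n_bars, steps, drums) binary onset array; returns one score per bar."""
    scores = []
    for i in range(len(bars)):
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(bars)]
        dists = [np.mean(bars[i] != bars[j]) for j in neighbours]
        scores.append(float(np.mean(dists)))
    return scores

rng = np.random.default_rng(0)
pattern = rng.integers(0, 2, size=(8, 16, 9))   # 8 bars, 16 steps, 9 drum pieces
print(bar_novelty(pattern))                     # one novelty score per bar
```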
- Re-creation of Creations: A New Paradigm for Lyric-to-Melody Generation [158.54649047794794]
Re-creation of Creations (ROC) is a new paradigm for lyric-to-melody generation.
ROC achieves good lyric-melody feature alignment in lyric-to-melody generation.
arXiv Detail & Related papers (2022-08-11T08:44:47Z)
- Conditional Drums Generation using Compound Word Representations [4.435094091999926]
We tackle the task of conditional drums generation using a novel data encoding scheme inspired by Compound Word representation.
We present a sequence-to-sequence architecture where a Bidirectional Long Short-Term Memory (BiLSTM) receives information about the conditioning parameters.
A Transformer-based Decoder with relative global attention produces the generated drum sequences.
arXiv Detail & Related papers (2022-02-09T13:49:27Z)
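
A rough sketch of the encoder/decoder split described above: a BiLSTM encodes the conditioning sequence and a Transformer decoder emits drum tokens. Standard absolute attention stands in for the paper's relative global attention, and all dimensions and the drum-token vocabulary are assumptions.

```python
# Simplified sketch, not the paper's model: BiLSTM encoder over conditioning
# parameters, Transformer decoder over drum tokens, plain (absolute) attention.
import torch
import torch.nn as nn

COND_DIM, VOCAB, D_MODEL = 16, 512, 256   # assumed conditioning width and token vocabulary

class ConditionalDrumGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.LSTM(COND_DIM, D_MODEL // 2, bidirectional=True, batch_first=True)
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerDecoderLayer(d_model=D_MODEL, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.out = nn.Linear(D_MODEL, VOCAB)

    def forward(self, cond, drum_tokens):
        memory, _ = self.encoder(cond)                 # (batch, cond_len, D_MODEL)
        tgt = self.embed(drum_tokens)                  # (batch, drum_len, D_MODEL)
        steps = tgt.size(1)
        causal = torch.triu(torch.full((steps, steps), float("-inf")), diagonal=1)
        h = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(h)                             # next-drum-token logits per position

model = ConditionalDrumGenerator()
logits = model(torch.randn(2, 10, COND_DIM), torch.randint(0, VOCAB, (2, 32)))
print(logits.shape)                                    # torch.Size([2, 32, 512])
```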
- Can GAN originate new electronic dance music genres? -- Generating novel rhythm patterns using GAN with Genre Ambiguity Loss [0.0]
This paper focuses on music generation, especially rhythm patterns of electronic dance music, and discusses if we can use deep learning to generate novel rhythms.
We extend the framework of Generative Adversarial Networks (GAN) and encourage it to diverge from the dataset's inherent distributions.
The paper shows that our proposed GAN can generate rhythm patterns that sound like music rhythms but do not belong to any genres in the training dataset.
arXiv Detail & Related papers (2020-11-25T23:22:12Z)
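
One way to read the "Genre Ambiguity Loss" above is as a penalty that pushes a genre classifier's prediction on generated rhythms toward the uniform distribution, so that outputs cannot be assigned to any training genre with confidence. The formulation below is an illustrative sketch and may differ from the paper's exact loss.

```python
# Illustrative genre-ambiguity-style objective (assumed formulation): cross-entropy
# between the classifier's genre distribution and the uniform distribution.
import torch
import torch.nn.functional as F

def genre_ambiguity_loss(genre_logits):
    """genre_logits: (batch, n_genres) classifier outputs on generated rhythms.
    Minimised when the classifier is maximally unsure which genre it is."""
    n_genres = genre_logits.size(-1)
    log_probs = F.log_softmax(genre_logits, dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / n_genres)
    return -(uniform * log_probs).sum(dim=-1).mean()

print(genre_ambiguity_loss(torch.randn(4, 10)))   # scalar loss on random logits
```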
- Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN).
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z)
- RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.