Can GAN originate new electronic dance music genres? -- Generating novel
rhythm patterns using GAN with Genre Ambiguity Loss
- URL: http://arxiv.org/abs/2011.13062v1
- Date: Wed, 25 Nov 2020 23:22:12 GMT
- Title: Can GAN originate new electronic dance music genres? -- Generating novel
rhythm patterns using GAN with Genre Ambiguity Loss
- Authors: Nao Tokui
- Abstract summary: This paper focuses on music generation, especially rhythm patterns of electronic dance music, and discusses whether we can use deep learning to generate novel rhythms.
We extend the framework of Generative Adversarial Networks (GAN) and encourage it to diverge from the dataset's inherent distributions.
The paper shows that our proposed GAN can generate rhythm patterns that sound like music rhythms but do not belong to any genres in the training dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since the introduction of deep learning, researchers have proposed content
generation systems using deep learning and shown that they can generate
convincing content and artistic output, including music. However, one
can argue that these deep learning-based systems imitate and reproduce the
patterns inherent within what humans have created, instead of generating
something new and creative. This paper focuses on music generation, especially
rhythm patterns of electronic dance music, and discusses whether we can use deep
learning to generate novel rhythms, interesting patterns not found in the
training dataset. We extend the framework of Generative Adversarial
Networks (GAN) and encourage it to diverge from the dataset's inherent
distributions by adding additional classifiers to the framework. The paper
shows that our proposed GAN can generate rhythm patterns that sound like music
rhythms but do not belong to any genres in the training dataset. The source
code, generated rhythm patterns, and a supplementary plugin for a popular
Digital Audio Workstation are available on our website.
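The abstract names a Genre Ambiguity Loss but does not spell out its form. Below is a minimal PyTorch sketch, assuming the loss follows the Creative Adversarial Networks idea of rewarding genre-ambiguous output via the cross-entropy between a genre classifier's posterior and a uniform distribution over genres; the function names, tensor shapes, and weighting are illustrative assumptions, not the paper's released implementation.

```python
# Hedged sketch of a CAN-style genre ambiguity term for a GAN generator.
# Assumption: the "additional classifier" from the abstract outputs genre
# logits for generated rhythm patterns; everything else is illustrative.
import torch
import torch.nn.functional as F

def genre_ambiguity_loss(genre_logits: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between the genre posterior and a uniform distribution.

    Minimizing this pushes generated rhythms toward patterns the genre
    classifier cannot confidently assign to any training-set genre.
    """
    log_probs = F.log_softmax(genre_logits, dim=-1)          # (batch, num_genres)
    num_genres = genre_logits.size(-1)
    uniform = torch.full_like(log_probs, 1.0 / num_genres)   # target distribution
    return -(uniform * log_probs).sum(dim=-1).mean()

def generator_loss(disc_logits_fake: torch.Tensor,
                   genre_logits_fake: torch.Tensor,
                   ambiguity_weight: float = 1.0) -> torch.Tensor:
    """Non-saturating GAN generator loss plus the genre ambiguity term."""
    adversarial = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    return adversarial + ambiguity_weight * genre_ambiguity_loss(genre_logits_fake)
```

In training, `genre_logits_fake` would come from the genre classifier evaluated on generated rhythm patterns, and `ambiguity_weight` (a hypothetical hyperparameter) trades off realism against divergence from the genres in the training dataset.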
Related papers
- MuPT: A Generative Symbolic Music Pretrained Transformer [56.09299510129221]
We explore the application of Large Language Models (LLMs) to the pre-training of music.
To address the challenges associated with misaligned measures from different tracks during generation, we propose a Synchronized Multi-Track ABC Notation (SMT-ABC Notation).
Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set.
arXiv Detail & Related papers (2024-04-09T15:35:52Z)
- An Autoethnographic Exploration of XAI in Algorithmic Composition [7.775986202112564]
This paper introduces an autoethnographic study of the use of the MeasureVAE generative music XAI model with interpretable latent dimensions trained on Irish music.
Findings suggest that the exploratory nature of the music-making workflow foregrounds musical features of the training dataset rather than features of the generative model itself.
arXiv Detail & Related papers (2023-08-11T12:03:17Z)
- Quantized GAN for Complex Music Generation from Dance Videos [48.196705493763986]
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates musical samples conditioned on dance videos.
Our proposed framework takes dance video frames and human body motion as input, and learns to generate music samples that plausibly accompany the corresponding input.
arXiv Detail & Related papers (2022-04-01T17:53:39Z)
- Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreography from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music.
arXiv Detail & Related papers (2021-12-03T09:37:26Z)
- GANs & Reels: Creating Irish Music using a Generative Adversarial Network [2.6604997762611204]
We present a method for algorithmic melody generation using a generative adversarial network without recurrent components.
Music generation has been done successfully using recurrent neural networks, where the model learns sequence information that can help create authentic-sounding melodies.
arXiv Detail & Related papers (2020-10-29T17:16:22Z)
- Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN).
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z)
- Artificial Neural Networks Jamming on the Beat [20.737171876839238]
The paper presents a large dataset of drum patterns along with corresponding melodies.
By exploring the latent space of drum patterns, one can generate new drum patterns in a given music style.
A simple artificial neural network can be trained to generate melodies corresponding to these drum patterns used as inputs.
arXiv Detail & Related papers (2020-07-13T10:09:20Z)
- Incorporating Music Knowledge in Continual Dataset Augmentation for Music Generation [69.06413031969674]
Aug-Gen is a method of dataset augmentation for any music generation system trained on a resource-constrained domain.
We apply Aug-Gen to Transformer-based chorale generation in the style of J.S. Bach, and show that this allows for longer training and results in better generative output.
arXiv Detail & Related papers (2020-06-23T21:06:15Z)
- From Artificial Neural Networks to Deep Learning for Music Generation -- History, Concepts and Trends [0.0]
This paper provides a tutorial on music generation based on deep learning techniques.
It analyzes some early works from the late 1980s using artificial neural networks for music generation.
arXiv Detail & Related papers (2020-04-07T00:33:56Z) - RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement
Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.