GANs & Reels: Creating Irish Music using a Generative Adversarial
Network
- URL: http://arxiv.org/abs/2010.15772v1
- Date: Thu, 29 Oct 2020 17:16:22 GMT
- Authors: Antonina Kolokolova, Mitchell Billard, Robert Bishop, Moustafa Elsisy,
Zachary Northcott, Laura Graves, Vineel Nagisetty, Heather Patey
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present a method for algorithmic melody generation
using a generative adversarial network without recurrent components. Music
generation has previously been done successfully with recurrent neural
networks, where the model learns sequence information that helps create
authentic-sounding melodies. Here, we use a DC-GAN architecture with dilated
convolutions and towers to capture sequential information as spatial image
information, and to learn long-range dependencies in fixed-length melody forms
such as the Irish traditional reel.
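The key idea above is that stacked dilated convolutions can cover long-range musical context without recurrence, because doubling the dilation at each layer grows the receptive field exponentially. A minimal numpy sketch of this mechanism (not the authors' implementation; the function names are illustrative):

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Valid 1-D convolution with a dilated kernel: each tap skips
    `dilation - 1` input steps, widening the context it sees."""
    k = len(w)
    span = (k - 1) * dilation + 1          # effective kernel footprint
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated conv layers."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# A kernel of size 3 with dilation 2 sums every other input step:
out = dilated_conv1d(np.arange(8.0), np.array([1.0, 1.0, 1.0]), 2)

# Doubling dilations (1, 2, 4, 8) give exponential context growth:
rf = receptive_field(3, [1, 2, 4, 8])
```

With four layers of kernel size 3 and dilations 1, 2, 4, 8, each output already depends on 31 input steps, which is why such stacks can model long-range dependencies in a fixed-length melody.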
Related papers
- Carnatic Raga Identification System using Rigorous Time-Delay Neural Network [0.0]
Large-scale machine learning-based raga identification remains a nontrivial problem in computational Carnatic music.
In this paper, the input sound is analyzed in several steps, including a Discrete Fourier Transform and triangular filtering to create custom bins of possible notes.
The goal is to efficiently label a much wider range of audio clips across more shrutis and ragas, and with more background noise.
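The DFT-plus-triangular-filtering step described above pools spectral energy into note-like bins. A minimal numpy sketch of such a filterbank (not the paper's implementation; here the filter centers are spaced linearly, whereas a shruti system would place them at the target note frequencies):

```python
import numpy as np

def triangular_filterbank(num_filters, num_bins):
    """Bank of overlapping triangular filters over DFT magnitude bins.
    Each filter rises linearly to its center and falls to the next edge."""
    edges = np.linspace(0, num_bins - 1, num_filters + 2)
    bins = np.arange(num_bins)
    bank = np.zeros((num_filters, num_bins))
    for m in range(1, num_filters + 1):
        lo, c, hi = edges[m - 1], edges[m], edges[m + 1]
        up = (bins - lo) / (c - lo)        # rising slope
        down = (hi - bins) / (hi - c)      # falling slope
        bank[m - 1] = np.clip(np.minimum(up, down), 0, None)
    return bank

# Pool the DFT magnitudes of a test tone into 12 note-like bins:
signal = np.sin(2 * np.pi * 50 * np.arange(512) / 512)
mags = np.abs(np.fft.rfft(signal))
energies = triangular_filterbank(12, len(mags)) @ mags
```

The resulting `energies` vector is a compact, note-oriented summary of the spectrum, which is the kind of custom-binned representation the paper feeds to its classifier.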
arXiv Detail & Related papers (2024-05-25T01:31:58Z) - WuYun: Exploring hierarchical skeleton-guided melody generation using
knowledge-enhanced deep learning [26.515527387450636]
WuYun is a knowledge-enhanced deep learning architecture for improving the structure of generated melodies.
We use music domain knowledge to extract melodic skeletons and employ sequence learning to reconstruct them.
We demonstrate that WuYun can generate melodies with better long-term structure and musicality and outperforms other state-of-the-art methods by 0.51 on average.
arXiv Detail & Related papers (2023-01-11T14:33:42Z) - Re-creation of Creations: A New Paradigm for Lyric-to-Melody Generation [158.54649047794794]
Re-creation of Creations (ROC) is a new paradigm for lyric-to-melody generation.
ROC achieves good lyric-melody feature alignment.
arXiv Detail & Related papers (2022-08-11T08:44:47Z) - Music Generation Using an LSTM [52.77024349608834]
Long Short-Term Memory (LSTM) network structures have proven to be very useful for making predictions for the next output in a series.
We demonstrate an approach to music generation using Recurrent Neural Networks (RNNs).
We provide a brief synopsis of the intuition, theory, and application of LSTMs in music generation; develop and present the network we found best achieves this goal; identify and address the issues and challenges faced; and outline potential future improvements to our network.
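The gating mechanism that makes LSTMs suited to next-note prediction can be written out in a few lines. A minimal numpy sketch of one LSTM step rolled over a note sequence (illustrative only, with tiny random weights; not the paper's trained network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates i, f, o and candidate g are stacked in W/U/b."""
    n = h.shape[0]
    z = W @ x + U @ h + b                  # all four pre-activations, (4n,)
    i = sigmoid(z[0:n])                    # input gate
    f = sigmoid(z[n:2*n])                  # forget gate
    o = sigmoid(z[2*n:3*n])                # output gate
    g = np.tanh(z[3*n:4*n])                # candidate cell state
    c_new = f * c + i * g                  # memory carries long-range info
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Roll the cell over a one-hot encoded note sequence:
rng = np.random.default_rng(0)
vocab, hidden = 8, 4
W = rng.normal(0, 0.1, (4 * hidden, vocab))
U = rng.normal(0, 0.1, (4 * hidden, hidden))
b = np.zeros(4 * hidden)
h = c = np.zeros(hidden)
for note in [0, 3, 5, 3]:
    h, c = lstm_step(np.eye(vocab)[note], h, c, W, U, b)
# h now summarizes the sequence; a linear + softmax head would predict the next note.
```

The forget gate `f` is what lets the cell state carry information across many notes, which is the "series prediction" ability the summary refers to.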
arXiv Detail & Related papers (2022-03-23T00:13:41Z) - Music Generation using Deep Learning [10.155748914174003]
The proposed approach takes ABC notations from the Nottingham dataset and encodes them to be fed as input to the neural networks.
The primary objective is to feed the network an arbitrary note and let it extend the sequence from that note until a good piece of music is produced.
arXiv Detail & Related papers (2021-05-19T10:27:58Z) - Hierarchical Recurrent Neural Networks for Conditional Melody Generation
with Long-term Structure [0.0]
We propose a conditional melody generation model based on a hierarchical recurrent neural network (CM-HRNN).
This model generates melodies with long-term structures based on given chord accompaniments.
Results from our listening test indicate that CM-HRNN outperforms AttentionRNN in terms of long-term structure and overall rating.
arXiv Detail & Related papers (2021-02-19T08:22:26Z) - Sequence Generation using Deep Recurrent Networks and Embeddings: A
study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z) - Can GAN originate new electronic dance music genres? -- Generating novel
rhythm patterns using GAN with Genre Ambiguity Loss [0.0]
This paper focuses on music generation, especially rhythm patterns of electronic dance music, and discusses if we can use deep learning to generate novel rhythms.
We extend the framework of Generative Adversarial Networks (GANs) and encourage it to diverge from the dataset's inherent distributions.
The paper shows that our proposed GAN can generate rhythm patterns that sound like music rhythms but do not belong to any genres in the training dataset.
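The divergence objective described above can be expressed as a loss that rewards the generator when a genre classifier cannot assign its output to any one training genre. A minimal numpy sketch of such a genre-ambiguity term (an illustrative CAN-style formulation, not necessarily the paper's exact loss):

```python
import numpy as np

def genre_ambiguity_loss(genre_probs):
    """Cross-entropy between the classifier's genre distribution and the
    uniform distribution over K genres: minimized when a generated rhythm
    is equally ambiguous across all training genres."""
    k = genre_probs.shape[-1]
    eps = 1e-12                            # numerical safety for log(0)
    return -np.mean(np.sum(np.log(genre_probs + eps) / k, axis=-1))

confident = np.array([[0.97, 0.01, 0.01, 0.01]])   # clearly one genre
ambiguous = np.array([[0.25, 0.25, 0.25, 0.25]])   # maximally ambiguous
```

Adding this term to the usual adversarial loss pulls the generator toward outputs that still fool the "is this music?" discriminator while confusing the genre classifier, yielding rhythms outside the training genres.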
arXiv Detail & Related papers (2020-11-25T23:22:12Z) - Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN).
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z) - Incremental Training of a Recurrent Neural Network Exploiting a
Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z) - RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement
Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.