Music Generation using Deep Learning
- URL: http://arxiv.org/abs/2105.09046v1
- Date: Wed, 19 May 2021 10:27:58 GMT
- Title: Music Generation using Deep Learning
- Authors: Vaishali Ingale, Anush Mohan, Divit Adlakha, Krishna Kumar and Mohit
Gupta
- Abstract summary: The proposed approach takes ABC notation from the Nottingham dataset and encodes it to be fed as input to the neural networks.
The primary objective is to feed the neural network an arbitrary note and let it process and extend a sequence based on that note until a good piece of music is produced.
- Score: 10.155748914174003
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper explores the idea of utilising Long Short-Term Memory neural
networks (LSTMNN) for the generation of musical sequences in ABC notation. The
proposed approach takes ABC notation from the Nottingham dataset and encodes
it to be fed as input to the neural networks. The primary objective is to feed
the neural network an arbitrary note and let it process and extend a sequence
based on that note until a good piece of music is produced. The network
parameters have been tuned multiple times for optimal generation. The output is
assessed on the basis of rhythm, harmony, and
grammar accuracy.
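The paper describes its pipeline only in prose: ABC tunes from the Nottingham dataset are encoded as token sequences, an LSTM is trained to predict the next token, and generation starts from an arbitrary seed note that the network keeps extending. A minimal character-level sketch of that loop, assuming PyTorch and illustrative layer sizes and sampling settings (none of which are taken from the paper), could look like this:

    import torch
    import torch.nn as nn

    # Hypothetical character-level model over ABC-notation text.
    class ABCGenerator(nn.Module):
        def __init__(self, vocab_size, embed_dim=64, hidden_dim=256, num_layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
            self.head = nn.Linear(hidden_dim, vocab_size)

        def forward(self, tokens, state=None):
            x = self.embed(tokens)            # (batch, seq, embed_dim)
            out, state = self.lstm(x, state)  # (batch, seq, hidden_dim)
            return self.head(out), state      # logits over the ABC vocabulary

    def sample(model, seed_ids, length=200, temperature=1.0):
        """Extend a seed (e.g. a single note token) one character at a time."""
        model.eval()
        tokens = torch.tensor([seed_ids])     # shape (1, len(seed_ids))
        state, generated = None, list(seed_ids)
        with torch.no_grad():
            for _ in range(length):
                logits, state = model(tokens, state)
                probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
                next_id = torch.multinomial(probs, 1).item()
                generated.append(next_id)
                tokens = torch.tensor([[next_id]])
        return generated                      # map ids back to ABC characters

Training such a model would minimise cross-entropy between the predicted and actual next characters of the Nottingham tunes; after training, the sampling loop above plays the role of "seed an arbitrary note and let the network extend it" described in the abstract, with the result judged on rhythm, harmony, and grammar accuracy.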
Related papers
- N-Gram Unsupervised Compoundation and Feature Injection for Better
Symbolic Music Understanding [27.554853901252084]
Music sequences exhibit strong correlations between adjacent elements, making them prime candidates for N-gram techniques from Natural Language Processing (NLP).
In this paper, we propose a novel method, NG-Midiformer, for understanding symbolic music sequences that leverages the N-gram approach (see the toy sketch after this list).
arXiv Detail & Related papers (2023-12-13T06:08:37Z)
- Roman Numeral Analysis with Graph Neural Networks: Onset-wise
Predictions from Note-wise Features [7.817685358710508]
This paper presents a new approach to automatic Roman Numeral analysis in symbolic music.
We propose a new method based on Graph Neural Networks (GNNs) that enable the direct description and processing of each individual note in the score.
Our results demonstrate that ChordGNN outperforms existing state-of-the-art models.
arXiv Detail & Related papers (2023-07-07T12:20:56Z)
- StitchNet: Composing Neural Networks from Pre-Trained Fragments [3.638431342539701]
We propose StitchNet, a novel neural network creation paradigm.
It stitches together fragments from multiple pre-trained neural networks.
We show that these fragments can be stitched together to create neural networks with accuracy comparable to that of traditionally trained networks.
arXiv Detail & Related papers (2023-01-05T08:02:30Z)
- A Scalable Graph Neural Network Decoder for Short Block Codes [49.25571364253986]
We propose a novel decoding algorithm for short block codes based on an edge-weighted graph neural network (EW-GNN).
The EW-GNN decoder operates on the Tanner graph with an iterative message-passing structure.
We show that the EW-GNN decoder outperforms the BP and deep-learning-based BP methods in terms of the decoding error rate.
arXiv Detail & Related papers (2022-11-13T17:13:12Z)
- Music Generation Using an LSTM [52.77024349608834]
Long Short-Term Memory (LSTM) network structures have proven to be very useful for making predictions for the next output in a series.
We demonstrate an approach to music generation using Recurrent Neural Networks (RNNs).
We provide a brief synopsis of the intuition, theory, and application of LSTMs in music generation, develop and present the network we found to best achieve this goal, identify and address issues and challenges faced, and include potential future improvements for our network.
arXiv Detail & Related papers (2022-03-23T00:13:41Z)
- Logsig-RNN: a novel network for robust and efficient skeleton-based
action recognition [3.775860173040509]
We propose a novel module, namely Logsig-RNN, which is the combination of the log-signature layer and recurrent-type neural networks (RNNs).
In particular, we achieve the state-of-the-art accuracy on Chalearn2013 gesture data by combining simple path transformation layers with the Logsig-RNN.
arXiv Detail & Related papers (2021-10-25T14:47:15Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- Sequence Generation using Deep Recurrent Networks and Embeddings: A
study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z)
- GANs & Reels: Creating Irish Music using a Generative Adversarial
Network [2.6604997762611204]
We present a method for algorithmic melody generation using a generative adversarial network without recurrent components.
Music generation has been successfully done using recurrent neural networks, where the model learns sequence information that can help create authentic sounding melodies.
arXiv Detail & Related papers (2020-10-29T17:16:22Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Encoding-based Memory Modules for Recurrent Neural Networks [79.42778415729475]
We study the memorization subtask from the point of view of the design and training of recurrent neural networks.
We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences.
arXiv Detail & Related papers (2020-01-31T11:14:27Z)
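As a toy illustration of the N-gram idea referenced in the NG-Midiformer entry above: if a melody is treated as a sequence of note tokens, the repeated sub-sequences that N-gram methods exploit can be collected in a few lines of Python. The note names below are invented for the example and are not taken from any of the papers listed here.

    from collections import Counter

    def note_ngrams(notes, n=3):
        """Collect all contiguous n-grams from a note sequence."""
        return [tuple(notes[i:i + n]) for i in range(len(notes) - n + 1)]

    # A toy melody as pitch names; a real system would use MIDI numbers or tokens.
    melody = ["C", "D", "E", "C", "D", "E", "F", "E", "D", "C"]

    trigram_counts = Counter(note_ngrams(melody, n=3))
    print(trigram_counts.most_common(1))
    # [(('C', 'D', 'E'), 2)] -- the repeated motif shows up as the top trigram

Counts like these capture the correlations between adjacent elements that the entry above cites as the motivation for applying N-gram techniques to symbolic music.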
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.