From Artificial Neural Networks to Deep Learning for Music Generation --
History, Concepts and Trends
- URL: http://arxiv.org/abs/2004.03586v2
- Date: Mon, 5 Oct 2020 22:33:16 GMT
- Authors: Jean-Pierre Briot
- Abstract summary: This paper provides a tutorial on music generation based on deep learning techniques.
It analyzes some early works from the late 1980s using artificial neural networks for music generation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The current wave of deep learning (the hyper-vitamined return of artificial
neural networks) applies not only to traditional statistical machine learning
tasks such as prediction and classification (e.g., weather prediction and pattern
recognition), but has also conquered other areas, such as translation. A
growing area of application is the generation of creative content, notably the
case of music, the topic of this paper. The motivation is in using the capacity
of modern deep learning techniques to automatically learn musical styles from
arbitrary musical corpora and then to generate musical samples from the
estimated distribution, with some degree of control over the generation. This
paper provides a tutorial on music generation based on deep learning
techniques. After a short introduction to the topic illustrated by a recent
example, the paper analyzes some early works from the late 1980s using
artificial neural networks for music generation and shows how their pioneering
contributions prefigured current techniques. Then, we introduce a
conceptual framework to analyze the various concepts and dimensions involved.
Various examples of recent systems are introduced and analyzed to illustrate
the variety of concerns and techniques.
Related papers
- A Survey of Music Generation in the Context of Interaction [3.6522809408725223]
Machine learning has been successfully used to compose and generate music, both melodies and polyphonic pieces.
Most of these models are not suitable for human-machine co-creation through live interaction.
arXiv Detail & Related papers (2024-02-23T12:41:44Z)
- Relating Human Perception of Musicality to Prediction in a Predictive Coding Model [0.8062120534124607]
We explore the use of a neural network inspired by predictive coding for modeling human music perception.
This network was developed based on the computational neuroscience theory of recurrent interactions in the hierarchical visual cortex.
We adapt this network to model the hierarchical auditory system and investigate whether it will make similar choices to humans regarding the musicality of a set of random pitch sequences.
arXiv Detail & Related papers (2022-10-29T12:20:01Z)
- Sequence Generation using Deep Recurrent Networks and Embeddings: A study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z)
- A Comprehensive Survey on Deep Music Generation: Multi-level Representations, Algorithms, Evaluations, and Future Directions [10.179835761549471]
This paper attempts to provide an overview of various composition tasks under different music generation levels using deep learning.
In addition, we summarize datasets suitable for diverse tasks, discuss the music representations, the evaluation methods as well as the challenges under different levels, and finally point out several future directions.
arXiv Detail & Related papers (2020-11-13T08:01:20Z)
- Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN)
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- Detecting Generic Music Features with Single Layer Feedforward Network using Unsupervised Hebbian Computation [3.8707695363745223]
The authors extract information on generic music features from a popular open-source music corpus.
They apply unsupervised Hebbian learning techniques to a single-layer neural network using the same dataset.
The unsupervised training algorithm enables the proposed network to achieve an accuracy of 90.36% in music feature detection.
arXiv Detail & Related papers (2020-08-31T13:57:31Z)
- Incorporating Music Knowledge in Continual Dataset Augmentation for Music Generation [69.06413031969674]
Aug-Gen is a method of dataset augmentation for any music generation system trained on a resource-constrained domain.
We apply Aug-Gen to Transformer-based chorale generation in the style of J.S. Bach, and show that this allows for longer training and results in better generative output.
arXiv Detail & Related papers (2020-06-23T21:06:15Z)
- Artificial Musical Intelligence: A Survey [51.477064918121336]
Music has become an increasingly prevalent domain of machine learning and artificial intelligence research.
This article provides a definition of musical intelligence, introduces a taxonomy of its constituent components, and surveys the wide range of AI methods that can be, and have been, brought to bear in its pursuit.
arXiv Detail & Related papers (2020-06-17T04:46:32Z)
- Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning [55.854205371307884]
We formalize the music-conditioned dance generation as a sequence-to-sequence learning problem.
We propose a novel curriculum learning strategy to alleviate error accumulation of autoregressive models in long motion sequence generation.
Our approach significantly outperforms existing state-of-the-art methods on automatic metrics and in human evaluation.
arXiv Detail & Related papers (2020-06-11T00:08:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.