Music Composition with Deep Learning: A Review
- URL: http://arxiv.org/abs/2108.12290v1
- Date: Fri, 27 Aug 2021 13:53:53 GMT
- Title: Music Composition with Deep Learning: A Review
- Authors: Carlos Hernandez-Olivan, Jose R. Beltran
- Abstract summary: We analyze the ability of current Deep Learning models to generate music with creativity.
We compare these models to the music composition process from a theoretical point of view.
- Score: 1.7188280334580197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating a complex work of art such as a musical composition requires
exhibiting true creativity that depends on a variety of factors that are
related to the hierarchy of musical language. Music generation has been
approached with algorithmic methods and, more recently, with Deep Learning
models that are also used in other fields such as Computer Vision. In this paper we want to put into
context the existing relationships between AI-based music composition models
and human musical composition and creativity processes. We give an overview of
the recent Deep Learning models for music composition and we compare these
models to the music composition process from a theoretical point of view. We
have tried to answer some of the most relevant open questions for this task by
analyzing the ability of current Deep Learning models to generate music with
creativity or the similarity between AI and human composition processes, among
others.
Related papers
- A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z)
- Foundation Models for Music: A Survey [77.77088584651268]
Foundation models (FMs) have profoundly impacted diverse sectors, including music.
This comprehensive review examines state-of-the-art (SOTA) pre-trained models and foundation models in music.
arXiv Detail & Related papers (2024-08-26T15:13:14Z)
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- ComposerX: Multi-Agent Symbolic Music Composition with LLMs [51.68908082829048]
Music composition is a complex task that requires abilities to understand and generate information with long dependency and harmony constraints.
Current LLMs easily fail at this task, generating poorly written music even when equipped with modern techniques such as In-Context Learning and Chain-of-Thought prompting.
We propose ComposerX, an agent-based symbolic music generation framework.
arXiv Detail & Related papers (2024-04-28T06:17:42Z)
- Motifs, Phrases, and Beyond: The Modelling of Structure in Symbolic Music Generation [2.8062498505437055]
Modelling musical structure is vital yet challenging for artificial intelligence systems that generate symbolic music compositions.
This literature review dissects the evolution of techniques for incorporating coherent structure.
We outline several key future directions to realize the synergistic benefits of combining approaches from all eras examined.
arXiv Detail & Related papers (2024-03-12T18:03:08Z)
- ByteComposer: a Human-like Melody Composition Method based on Language Model Agent [11.792129708566598]
Large Language Models (LLM) have shown encouraging progress in multimodal understanding and generation tasks.
We propose ByteComposer, an agent framework emulating a human's creative pipeline in four separate steps.
We conduct extensive experiments on GPT-4 and several open-source large language models, which substantiate our framework's effectiveness.
arXiv Detail & Related papers (2024-02-24T04:35:07Z)
- Models of Music Cognition and Composition [0.0]
We first motivate why music is relevant to cognitive scientists and give an overview of the approaches to computational modelling of music cognition.
We then review literature on the various models of music perception, including non-computational models, computational non-cognitive models and computational cognitive models.
arXiv Detail & Related papers (2022-08-14T16:27:59Z)
- Music Harmony Generation, through Deep Learning and Using a Multi-Objective Evolutionary Algorithm [0.0]
This paper introduces a genetic multi-objective evolutionary optimization algorithm for the generation of polyphonic music.
One objective encodes the rules of music theory; together with two further objectives based on ratings from music experts and from ordinary listeners, it guides the evolutionary cycle toward the most optimal response.
The results show that the proposed method can generate pleasant pieces in the desired styles and lengths, with harmonies that follow the rules of music while remaining attractive to the listener.
arXiv Detail & Related papers (2021-02-16T05:05:54Z)
- Sequence Generation using Deep Recurrent Networks and Embeddings: A study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z)
- Adaptive music: Automated music composition and distribution [0.0]
We present Melomics: an algorithmic composition method based on evolutionary search.
The system has exhibited high creativity and versatility in producing music of different types.
It has also enabled the emergence of a set of completely novel applications.
arXiv Detail & Related papers (2020-07-25T09:38:06Z)
- RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic, and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.