Adaptive music: Automated music composition and distribution
- URL: http://arxiv.org/abs/2008.04415v2
- Date: Tue, 25 Jan 2022 11:14:00 GMT
- Title: Adaptive music: Automated music composition and distribution
- Authors: David Daniel Albarracín Molina
- Abstract summary: We present Melomics: an algorithmic composition method based on evolutionary search.
The system has exhibited high creative power and versatility in producing music of different types.
It has also enabled the emergence of a set of completely novel applications.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Creativity, or the ability to produce new useful ideas, is commonly
associated with human beings, but there are many other examples in nature
where this phenomenon can be observed. Inspired by this fact, many different
models have been developed in engineering, and particularly in the
computational sciences, to tackle a variety of problems.
Composing music, a form of art present throughout human history, is the main
topic addressed in this thesis. Drawing on the kinds of ideas that bring
diversity and creativity to nature and computation, we present Melomics: an
algorithmic composition method based on evolutionary search. Solutions have a
genetic encoding based on formal grammars; these encodings are interpreted
through a complex developmental process followed by a fitness assessment to
produce valid music compositions in standard formats.
The system has exhibited high creative power and versatility in producing
music of different types, and in testing the output has on many occasions
proved indistinguishable from music made by human composers. The system has
also enabled the emergence of a set of completely novel applications: from
effective tools that help anyone easily obtain precisely the music they need,
to radically new uses such as adaptive music for therapy, exercise, amusement
and many others. It seems clear that automated composition is an active
research area and that countless new uses will be discovered.
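The pipeline the abstract describes, a genetic encoding based on formal grammars, a developmental interpretation step, and a fitness assessment driving evolutionary search, can be illustrated with a toy sketch. Everything below (the grammar, the fitness criterion, and all names and parameters) is an illustrative assumption, not the thesis's actual implementation:

```python
import random

# A minimal, hypothetical sketch of grammar-guided evolutionary composition.
# A genome is a list of integers that select production rules; "development"
# expands a start symbol into a concrete note sequence (MIDI pitches).
GRAMMAR = {
    "phrase": [["motif", "motif"], ["motif", "cadence"]],
    "motif": [[60, 62, 64], [64, 62, 60], [60, 64, 67]],
    "cadence": [[67, 65, 64, 60]],
}

def develop(genome):
    """Interpret the genetic encoding: use the genome's rule choices to
    expand 'phrase' into a list of notes."""
    choices = iter(genome)
    notes = []
    for part in GRAMMAR["phrase"][next(choices) % 2]:
        expansion = GRAMMAR[part]
        notes.extend(expansion[next(choices) % len(expansion)])
    return notes

def fitness(notes):
    """Toy fitness: reward stepwise motion (small melodic intervals) as a
    stand-in for a real musical evaluation."""
    return -sum(abs(b - a) for a, b in zip(notes, notes[1:]))

def evolve(pop_size=20, genome_len=3, generations=30, seed=0):
    """Elitist evolutionary search over genomes: keep the best half each
    generation and refill the population with point-mutated copies."""
    rng = random.Random(seed)
    pop = [[rng.randrange(4) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(develop(g)), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] = rng.randrange(4)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda g: fitness(develop(g)))

best = evolve()
print(develop(best))
```

The key design point mirrored from the abstract is the indirect encoding: the search operates on compact genomes, while the grammar-driven `develop` step guarantees every candidate expands to a structurally valid piece before fitness is ever assessed.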
Related papers
- A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z)
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- ComposerX: Multi-Agent Symbolic Music Composition with LLMs [51.68908082829048]
Music composition is a complex task that requires the ability to understand and generate information under long-range dependency and harmony constraints.
Current LLMs easily fail at this task, generating poorly written music even when equipped with modern techniques like In-Context Learning and Chain-of-Thought prompting.
We propose ComposerX, an agent-based symbolic music generation framework.
arXiv Detail & Related papers (2024-04-28T06:17:42Z)
- Models of Music Cognition and Composition [0.0]
We first motivate why music is relevant to cognitive scientists and give an overview of the approaches to computational modelling of music cognition.
We then review literature on the various models of music perception, including non-computational models, computational non-cognitive models and computational cognitive models.
arXiv Detail & Related papers (2022-08-14T16:27:59Z)
- Quantized GAN for Complex Music Generation from Dance Videos [48.196705493763986]
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates musical samples conditioned on dance videos.
Our proposed framework takes dance video frames and human body motion as input, and learns to generate music samples that plausibly accompany the corresponding input.
arXiv Detail & Related papers (2022-04-01T17:53:39Z)
- Music Composition with Deep Learning: A Review [1.7188280334580197]
We analyze the ability of current Deep Learning models to generate music with creativity.
We compare these models to the music composition process from a theoretical point of view.
arXiv Detail & Related papers (2021-08-27T13:53:53Z)
- Personalized Popular Music Generation Using Imitation and Structure [1.971709238332434]
We propose a statistical machine learning model that is able to capture and imitate the structure, melody, chord, and bass style from a given example seed song.
An evaluation using 10 pop songs shows that our new representations and methods are able to create high-quality stylistic music.
arXiv Detail & Related papers (2021-05-10T23:43:00Z)
- Music Embedding: A Tool for Incorporating Music Theory into Computational Music Applications [0.3553493344868413]
Representing music digitally in a concise, music-theoretic manner is important.
Existing approaches to representing music make little use of music theory.
arXiv Detail & Related papers (2021-04-24T04:32:45Z)
- Music Harmony Generation, through Deep Learning and Using a Multi-Objective Evolutionary Algorithm [0.0]
This paper introduces a genetic multi-objective evolutionary optimization algorithm for the generation of polyphonic music.
One objective encodes the rules of music theory; the other two incorporate the scores of music experts and of ordinary listeners, and together they drive the cycle of evolution toward the most optimal response.
The results show that the proposed method can generate challenging and pleasant pieces of the desired styles and lengths, with harmonies that follow the grammar while engaging the listener.
arXiv Detail & Related papers (2021-02-16T05:05:54Z)
- Artificial Musical Intelligence: A Survey [51.477064918121336]
Music has become an increasingly prevalent domain of machine learning and artificial intelligence research.
This article provides a definition of musical intelligence, introduces a taxonomy of its constituent components, and surveys the wide range of AI methods that can be, and have been, brought to bear in its pursuit.
arXiv Detail & Related papers (2020-06-17T04:46:32Z)
- RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.