Music Harmony Generation, through Deep Learning and Using a
Multi-Objective Evolutionary Algorithm
- URL: http://arxiv.org/abs/2102.07960v1
- Date: Tue, 16 Feb 2021 05:05:54 GMT
- Title: Music Harmony Generation, through Deep Learning and Using a
Multi-Objective Evolutionary Algorithm
- Authors: Maryam Majidi and Rahil Mahdian Toroghi
- Abstract summary: This paper introduces a genetic multi-objective evolutionary optimization algorithm for the generation of polyphonic music.
One objective encodes the rules and grammar of music; the other two are the
scores given by music experts and by ordinary listeners, and together the
three drive the cycle of evolution toward the optimal response.
The results show that the proposed method can generate intricate yet pleasant
pieces of the desired style and length, with harmonies that follow the
grammar while appealing to the listener.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic music generation has become a central research topic for
many artificial intelligence researchers who are also interested in the music
industry. As a balanced combination of math and art, music in collaboration
with AI can simplify the process of generating new musical pieces and make
their interpretation tangible. On the other hand, the artistic nature of music
and its entanglement with the senses and feelings of the composer make its
artificial generation and mathematical modeling challenging. In fact, there is
no clear evaluation measure that combines the objective grammar and structure
of music with the subjective goal of audience satisfaction. Moreover, original
music combines several elements that must inevitably be put together.
Therefore, this paper introduces a method based on a genetic multi-objective
evolutionary optimization algorithm for generating polyphonic music (melody
with rhythm and harmony, i.e., appropriate chords), in which three specific
objectives determine the quality of the generated music. One objective encodes
the rules and grammar of music; the other two are the scores given by music
experts and by ordinary listeners, and together the three drive the cycle of
evolution toward the optimal response. The separate scoring by experts and
listeners is modeled with a Bi-LSTM neural network and incorporated into the
fitness function of the algorithm. The results show that the proposed method
can generate intricate yet pleasant pieces of the desired style and length,
with harmonies that follow the grammar while appealing to the listener.
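As a concrete illustration of the approach described above, the sketch below
combines a rule-based grammar objective with two learned Bi-LSTM raters (one
for experts, one for listeners) into a three-objective fitness inside a simple
evolutionary loop. It is a minimal sketch in PyTorch, not the authors'
implementation: the note encoding, the consonance rule, the network sizes, and
all names (ScoreModel, rule_score, fitness, evolve) are illustrative
assumptions, and the scalarized selection stands in for a genuine
multi-objective scheme such as NSGA-II non-dominated sorting.

```python
import random
import torch
import torch.nn as nn

class ScoreModel(nn.Module):
    """Bi-LSTM mapping a note sequence to a scalar in [0, 1]; stands in for
    the learned expert/listener raters described in the abstract."""
    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, notes):                     # notes: (batch, seq_len)
        out, _ = self.lstm(self.embed(notes))
        return torch.sigmoid(self.head(out[:, -1])).squeeze(-1)

def rule_score(piece):
    """Toy stand-in for the music-grammar objective: fraction of adjacent
    pitch intervals that are consonant (illustrative rule, not the paper's)."""
    consonant = {0, 3, 4, 5, 7, 8, 9, 12}
    good = sum(1 for a, b in zip(piece, piece[1:])
               if abs(b - a) % 12 in consonant)
    return good / max(len(piece) - 1, 1)

def fitness(piece, expert_model, listener_model):
    """Three objectives, all maximized: grammar, expert score, listener
    score (the latter two predicted by the Bi-LSTM raters)."""
    notes = torch.tensor([piece])
    with torch.no_grad():
        expert = expert_model(notes).item()
        listener = listener_model(notes).item()
    return rule_score(piece), expert, listener

def evolve(population, expert_model, listener_model, generations=50):
    """Keep the best half by a scalarized fitness sum and refill with
    mutated copies; a faithful multi-objective version would replace the
    plain sum with non-dominated (Pareto) sorting."""
    for _ in range(generations):
        ranked = sorted(population, reverse=True,
                        key=lambda p: sum(fitness(p, expert_model,
                                                  listener_model)))
        parents = ranked[: len(population) // 2]
        children = [[random.randrange(128) if random.random() < 0.1 else n
                     for n in random.choice(parents)]
                    for _ in range(len(population) - len(parents))]
        population = parents + children
    return population
```

A hypothetical run would train two ScoreModel instances separately on expert
and listener ratings, then call evolve on a random initial population of note
lists.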
Related papers
- A Survey of Foundation Models for Music Understanding (2024-09-15)
  This work is one of the early reviews of the intersection of AI techniques
  and music understanding. We investigated, analyzed, and tested recent
  large-scale music foundation models with respect to their music
  comprehension abilities.
- MuDiT & MuSiT: Alignment with Colloquial Expression in Description-to-Song
  Generation (2024-07-03)
  We propose the novel task of Colloquial Description-to-Song Generation,
  which focuses on aligning the generated content with colloquial human
  expressions. The task aims to bridge the gap between colloquial language
  understanding and auditory expression within an AI model.
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion
  Models (2024-06-07)
  Inspired by how musicians compose music not just from a movie script but
  also through visualizations, we propose MeLFusion, a model that can
  effectively use cues from a textual description and the corresponding image
  to synthesize music. Our exhaustive experimental evaluation suggests that
  adding visual information to the music synthesis pipeline significantly
  improves the quality of the generated music.
- ComposerX: Multi-Agent Symbolic Music Composition with LLMs (2024-04-28)
  Music composition is a complex task that requires the ability to understand
  and generate information with long-range dependencies and harmony
  constraints. Current LLMs easily fail at this task, generating ill-written
  music even when equipped with modern techniques such as In-Context Learning
  and Chain-of-Thought. We propose ComposerX, an agent-based symbolic music
  generation framework.
- Contrastive Learning with Positive-Negative Frame Mask for Music
  Representation (2022-03-17)
  This paper proposes a novel Positive-nEgative frame mask for Music
  Representation based on the contrastive learning framework, abbreviated as
  PEMR. We devise a novel contrastive learning objective that accommodates
  both self-augmented positives and negatives sampled from the same music (a
  generic sketch of such an objective appears after this list).
- Structure-Enhanced Pop Music Generation via Harmony-Aware Learning
  (2021-09-14)
  We propose to leverage harmony-aware learning for structure-enhanced pop
  music generation. Results of subjective and objective evaluations
  demonstrate that the Harmony-Aware Hierarchical Music Transformer (HAT)
  significantly improves the quality of generated music.
- Musical Prosody-Driven Emotion Classification: Interpreting Vocalists'
  Portrayal of Emotions Through Machine Learning (2021-06-04)
  The role of musical prosody remains under-explored despite several studies
  demonstrating a strong connection between prosody and emotion. In this
  study, we restrict the input of traditional machine learning algorithms to
  the features of musical prosody, using a methodology of individual data
  collection from vocalists and personal ground-truth labeling by the artists
  themselves.
- Research on AI Composition Recognition Based on Music Rules (2020-10-15)
  The article constructs a music-rule-identifying algorithm by extracting
  modes; it identifies the stability of the mode of machine-generated music
  to judge whether a piece was composed by artificial intelligence.
- Adaptive music: Automated music composition and distribution (2020-07-25)
  We present Melomics, an algorithmic composition method based on
  evolutionary search. The system has exhibited high creative power and the
  versatility to produce music of different types, and it has enabled the
  emergence of a set of completely novel applications.
- Music Gesture for Visual Sound Separation (2020-04-20)
  "Music Gesture" is a keypoint-based structured representation that
  explicitly models the body and finger movements of musicians as they
  perform. We first adopt a context-aware graph network to integrate visual
  semantic context with body dynamics, and then apply an audio-visual fusion
  model to associate body movements with the corresponding audio signals.
- RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement
  Learning (2020-02-08)
  This paper presents a deep reinforcement learning algorithm for online
  accompaniment generation; the proposed algorithm is able to respond to the
  human part and generate a melodic, harmonic, and diverse machine part.
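The PEMR entry above mentions a contrastive objective over positively and
negatively masked views of the same music. As a point of reference only, here
is a minimal, generic InfoNCE-style contrastive loss in PyTorch; it is not
PEMR's actual formulation, and the tensor shapes, temperature value, and
function name are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    # anchor: (d,) embedding of the original clip
    # positive: (d,) embedding of a positively masked view of the same clip
    # negatives: (k, d) embeddings of negatively masked frames
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = anchor @ positive / temperature            # scalar similarity
    neg_sim = negatives @ anchor / temperature           # (k,) similarities
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])  # (k + 1,) logits
    target = torch.zeros(1, dtype=torch.long)            # positive is class 0
    return F.cross_entropy(logits.unsqueeze(0), target)
```

Minimizing this loss pulls the anchor embedding toward its positively masked
view and pushes it away from the negative frames, treating the comparison as
a (k + 1)-way classification in which the positive is class 0.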
This list is automatically generated from the titles and abstracts of the
papers on this site.