Research on AI Composition Recognition Based on Music Rules
- URL: http://arxiv.org/abs/2010.07805v1
- Date: Thu, 15 Oct 2020 14:51:24 GMT
- Title: Research on AI Composition Recognition Based on Music Rules
- Authors: Yang Deng, Ziyao Xu, Li Zhou, Huanping Liu, Anqi Huang
- Abstract summary: The article constructs a music-rule-identifying algorithm by extracting modes.
It measures the stability of the mode of a piece of machine-generated music to judge whether the piece was produced by artificial intelligence.
- Score: 7.699648754969773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of artificially intelligent composition has made
machine-generated pieces increasingly popular, and copyright disputes have
consequently become frequent. Research on telling human-composed works apart
from machine-generated ones is still insufficient, so the creation of a
method to identify and distinguish these works is of particular importance.
Starting from the essence of the music, the article constructs a
music-rule-identifying algorithm based on mode extraction: it measures the
stability of the mode of a piece to judge whether the piece is
machine-generated. The evaluation datasets used are provided by the
Conference on Sound and Music Technology (CSMT). Experimental results
demonstrate that the algorithm can successfully distinguish between datasets
with different source distributions. The algorithm also provides a
technological reference for the healthy development of music copyright and
artificially intelligent music.
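The core idea described above, mode extraction followed by a stability check, can be sketched in a few lines. The sketch below is an illustrative simplification, not the paper's actual algorithm: it estimates a major-key tonic for each segment with the standard Krumhansl-Schmuckler profile correlation and treats the fraction of segments agreeing with the whole-piece estimate as a crude mode-stability score. The function names, segment length, and the use of major profiles only are all assumptions.

```python
# Illustrative sketch (not the paper's exact algorithm): per-segment key
# estimation via Krumhansl-Schmuckler profile correlation, with the share of
# segments matching the global key used as a crude "mode stability" score.

KRUMHANSL_MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                   2.52, 5.19, 2.39, 3.66, 2.29, 2.88]

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def estimate_tonic(pitches):
    """Return the best-matching major tonic (pitch class 0-11) for MIDI pitches."""
    hist = [0] * 12
    for p in pitches:
        hist[p % 12] += 1
    def score(tonic):
        profile = [KRUMHANSL_MAJOR[(pc - tonic) % 12] for pc in range(12)]
        return correlation(hist, profile)
    return max(range(12), key=score)

def mode_stability(pitches, segment_len=16):
    """Fraction of segments whose estimated tonic matches the whole piece's."""
    global_tonic = estimate_tonic(pitches)
    segments = [pitches[i:i + segment_len]
                for i in range(0, len(pitches), segment_len)]
    matches = sum(estimate_tonic(seg) == global_tonic for seg in segments if seg)
    return matches / len(segments)
```

For a piece that stays in one key (for example, a repeated C-major scale) the score is 1.0; pieces whose mode drifts between segments score lower, which is the kind of signal a mode-stability classifier could threshold on.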
Related papers
- Generating High-quality Symbolic Music Using Fine-grained Discriminators [42.200747558496055]
We propose to decouple the melody and rhythm from music, and design corresponding fine-grained discriminators to tackle the issues.
Specifically, equipped with a pitch augmentation strategy, the melody discriminator discerns the melody variations presented by the generated samples.
The rhythm discriminator, enhanced with bar-level relative positional encoding, focuses on the velocity of generated notes.
arXiv Detail & Related papers (2024-08-03T07:32:21Z)
- Towards Assessing Data Replication in Music Generation with Music Similarity Metrics on Raw Audio [25.254669525489923]
We present a model-independent open evaluation method based on diverse audio music similarity metrics to assess data replication.
Our results show that the proposed methodology can estimate exact data replication with a proportion higher than 10%.
arXiv Detail & Related papers (2024-07-19T14:52:11Z)
- MARBLE: Music Audio Representation Benchmark for Universal Evaluation [79.25065218663458]
We introduce the Music Audio Representation Benchmark for universaL Evaluation, termed MARBLE.
It aims to provide a benchmark for various Music Information Retrieval (MIR) tasks by defining a comprehensive taxonomy with four hierarchy levels, including acoustic, performance, score, and high-level description.
We then establish a unified protocol based on 14 tasks on 8 publicly available datasets, providing a fair, standardized assessment of the representations of all open-source pre-trained models developed on music recordings as baselines.
arXiv Detail & Related papers (2023-06-18T12:56:46Z)
- Quantized GAN for Complex Music Generation from Dance Videos [48.196705493763986]
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates musical samples conditioned on dance videos.
Our proposed framework takes dance video frames and human body motion as input, and learns to generate music samples that plausibly accompany the corresponding input.
arXiv Detail & Related papers (2022-04-01T17:53:39Z)
- Bach Style Music Authoring System based on Deep Learning [0.0]
The research purpose of this paper is to design a Bach style music authoring system based on deep learning.
We train an LSTM neural network on serialized and standardized music feature data.
We find the optimal LSTM model, which can generate imitations of Bach's music.
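The summary's exact feature format is not specified, so the sketch below only illustrates the generic "serialize the music" preprocessing step such a system needs before LSTM training: encoding note names as integer ids and slicing the sequence into fixed-length (input window, next note) pairs for next-note prediction. The function name and window size are assumptions.

```python
# Hedged sketch of a common preprocessing step for sequence-model training
# (not the paper's actual pipeline): id-encode a note sequence and build
# fixed-length next-note prediction windows.

def make_training_windows(notes, window=4):
    """Return ((input_ids, target_id) pairs, vocabulary) for a note sequence."""
    vocab = sorted(set(notes))              # stable note-name -> id mapping
    to_id = {n: i for i, n in enumerate(vocab)}
    ids = [to_id[n] for n in notes]
    pairs = [(ids[i:i + window], ids[i + window])
             for i in range(len(ids) - window)]
    return pairs, vocab
```

Each pair is one supervised example: the model sees `window` consecutive note ids and is trained to predict the id of the note that follows.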
arXiv Detail & Related papers (2021-10-06T10:30:09Z)
- Music Harmony Generation, through Deep Learning and Using a Multi-Objective Evolutionary Algorithm [0.0]
This paper introduces a genetic multi-objective evolutionary optimization algorithm for the generation of polyphonic music.
One of the objectives is conformance to the rules of music; together with the other two objectives, the ratings of music experts and of ordinary listeners, it drives the evolutionary cycle toward the most optimal response.
The results show that the proposed method is able to generate complex and pleasant pieces of the desired styles and lengths, with harmonic parts that follow the grammar of music while engaging the listener.
arXiv Detail & Related papers (2021-02-16T05:05:54Z)
- Sequence Generation using Deep Recurrent Networks and Embeddings: A study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z)
- dMelodies: A Music Dataset for Disentanglement Learning [70.90415511736089]
We present a new symbolic music dataset that will help researchers demonstrate the efficacy of their algorithms on diverse domains.
This will also provide a means for evaluating algorithms specifically designed for music.
The dataset is large enough (approx. 1.3 million data points) to train and test deep networks for disentanglement learning.
arXiv Detail & Related papers (2020-07-29T19:20:07Z)
- Artificial Musical Intelligence: A Survey [51.477064918121336]
Music has become an increasingly prevalent domain of machine learning and artificial intelligence research.
This article provides a definition of musical intelligence, introduces a taxonomy of its constituent components, and surveys the wide range of AI methods that can be, and have been, brought to bear in its pursuit.
arXiv Detail & Related papers (2020-06-17T04:46:32Z)
- RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.