Deep Composer Classification Using Symbolic Representation
- URL: http://arxiv.org/abs/2010.00823v2
- Date: Mon, 26 Oct 2020 14:03:26 GMT
- Title: Deep Composer Classification Using Symbolic Representation
- Authors: Sunghyeon Kim, Hyeyoon Lee, Sunjong Park, Jinho Lee, Keunwoo Choi
- Abstract summary: In this study, we train deep neural networks to classify composer on a symbolic domain.
The model takes a two-channel two-dimensional input, which is converted from MIDI recordings and performs a single-label classification.
On the experiments conducted on MAESTRO dataset, we report an F1 value of 0.8333 for the classification of 13 classical composers.
- Score: 6.656753488329095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we train deep neural networks to classify composer on a
symbolic domain. The model takes a two-channel two-dimensional input, i.e.,
onset and note activations of time-pitch representation, which is converted
from MIDI recordings and performs a single-label classification. On the
experiments conducted on MAESTRO dataset, we report an F1 value of 0.8333 for
the classification of 13 classical composers.
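The abstract's two-channel input (onset and note activations on a time-pitch grid) can be illustrated with a minimal sketch. The paper's exact preprocessing is not specified here; the `midi_notes_to_pianoroll` helper, the synthetic note list, and the 20 Hz frame rate are all assumptions for illustration only.

```python
import numpy as np

def midi_notes_to_pianoroll(notes, fs=20, n_pitches=128):
    """Convert (onset_sec, offset_sec, pitch) tuples into a two-channel
    time-pitch matrix: channel 0 marks note onsets, channel 1 marks
    every frame during which the note is sounding (activation)."""
    if not notes:
        return np.zeros((2, 1, n_pitches), dtype=np.float32)
    n_frames = int(np.ceil(max(off for _, off, _ in notes) * fs)) + 1
    roll = np.zeros((2, n_frames, n_pitches), dtype=np.float32)
    for onset, offset, pitch in notes:
        t0 = int(onset * fs)
        t1 = max(t0 + 1, int(offset * fs))  # at least one active frame
        roll[0, t0, pitch] = 1.0            # onset channel
        roll[1, t0:t1, pitch] = 1.0         # activation channel
    return roll

# Hypothetical notes: (onset_sec, offset_sec, MIDI pitch)
notes = [(0.0, 0.5, 60), (0.25, 1.0, 64)]
roll = midi_notes_to_pianoroll(notes)
print(roll.shape)  # (2, 21, 128): channels x time frames x pitches
```

A real pipeline would extract the note tuples from MIDI files (e.g., with a MIDI parsing library) before feeding the resulting tensor to the classifier.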
Related papers
- Arabic Music Classification and Generation using Deep Learning [1.4721222689583375]
This paper proposes a machine learning approach for classifying classical and new Egyptian music by composer and generating new similar music.
The proposed system utilizes a convolutional neural network (CNN) for classification and a CNN autoencoder for generation.
The model achieved 81.4% accuracy in classifying the music by composer, demonstrating the effectiveness of the proposed approach.
arXiv Detail & Related papers (2024-10-25T17:47:08Z)
- Music Genre Classification using Large Language Models [50.750620612351284]
This paper exploits the zero-shot capabilities of pre-trained large language models (LLMs) for music genre classification.
The proposed approach splits audio signals into 20 ms chunks and processes them through convolutional feature encoders.
During inference, predictions on individual chunks are aggregated for a final genre classification.
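The chunk-level aggregation described above could be sketched as a simple majority vote over per-chunk predictions; this is one plausible rule for illustration, and the paper's exact aggregation scheme is not given here.

```python
from collections import Counter

def aggregate_chunk_predictions(chunk_labels):
    """Aggregate per-chunk genre predictions for one track by majority
    vote (hypothetical aggregation rule, assumed for illustration)."""
    if not chunk_labels:
        raise ValueError("no chunk predictions to aggregate")
    return Counter(chunk_labels).most_common(1)[0][0]

# Hypothetical per-chunk predictions for a single track
print(aggregate_chunk_predictions(["jazz", "rock", "jazz", "jazz"]))  # jazz
```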
arXiv Detail & Related papers (2024-10-10T19:17:56Z)
- Toward a More Complete OMR Solution [49.74172035862698]
Optical music recognition aims to convert music notation into digital formats.
One approach to OMR is a multi-stage pipeline, in which the system first detects visual music notation elements in the image.
We introduce a music object detector based on YOLOv8, which improves detection performance.
Second, we introduce a supervised training pipeline that completes the notation assembly stage based on detection output.
arXiv Detail & Related papers (2024-08-31T01:09:12Z)
- Optical Music Recognition in Manuscripts from the Ricordi Archive [6.274767633959002]
The Ricordi archive, a prestigious collection of significant musical manuscripts from renowned opera composers such as Donizetti, Verdi, and Puccini, has been digitized.
We have automatically extracted samples that represent various musical elements depicted on the manuscripts, including notes, staves, clefs, erasures, and composer's annotations.
We trained multiple neural network-based classifiers to differentiate between the identified music elements.
arXiv Detail & Related papers (2024-08-14T09:29:11Z)
- Learning Hierarchical Metrical Structure Beyond Measures [3.7294116330265394]
Hierarchical structure annotations are helpful for music information retrieval and computer musicology.
We propose a data-driven approach to automatically extract hierarchical metrical structures from scores.
We show by experiments that the proposed method performs better than the rule-based approach under different orchestration settings.
arXiv Detail & Related papers (2022-09-21T11:08:52Z)
- Symphony Generation with Permutation Invariant Language Model [57.75739773758614]
We present a symbolic symphony music generation solution, SymphonyNet, based on a permutation invariant language model.
A novel transformer decoder architecture is introduced as the backbone for modeling extra-long sequences of symphony tokens.
Our empirical results show that the proposed approach can generate coherent, novel, complex, and harmonious symphonies comparable to human compositions.
arXiv Detail & Related papers (2022-05-10T13:08:49Z)
- BERT-like Pre-training for Symbolic Piano Music Classification Tasks [15.02723006489356]
This article presents a benchmark study of symbolic piano music classification using the Bidirectional Encoder Representations from Transformers (BERT) approach.
We pre-train two 12-layer Transformer models using the BERT approach and fine-tune them for four downstream classification tasks.
Our evaluation shows that the BERT approach leads to higher classification accuracy than recurrent neural network (RNN)-based baselines.
arXiv Detail & Related papers (2021-07-12T07:03:57Z)
- Sequence Generation using Deep Recurrent Networks and Embeddings: A study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z)
- Large-Scale MIDI-based Composer Classification [13.815200249190529]
We propose large-scale MIDI-based composer classification systems using GiantMIDI-Piano.
We are the first to investigate the composer classification problem with up to 100 composers.
Our system achieves 10-composer and 100-composer classification accuracies of 0.648 and 0.385, respectively.
arXiv Detail & Related papers (2020-10-28T08:07:55Z)
- Score-informed Networks for Music Performance Assessment [64.12728872707446]
Deep neural network-based methods incorporating score information into MPA models have not yet been investigated.
We introduce three different models capable of score-informed performance assessment.
arXiv Detail & Related papers (2020-08-01T07:46:24Z)
- dMelodies: A Music Dataset for Disentanglement Learning [70.90415511736089]
We present a new symbolic music dataset that will help researchers demonstrate the efficacy of their algorithms on diverse domains.
This will also provide a means for evaluating algorithms specifically designed for music.
The dataset is large enough (approx. 1.3 million data points) to train and test deep networks for disentanglement learning.
arXiv Detail & Related papers (2020-07-29T19:20:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.