Complex Network-Based Approach for Feature Extraction and Classification
of Musical Genres
- URL: http://arxiv.org/abs/2110.04654v1
- Date: Sat, 9 Oct 2021 22:23:33 GMT
- Title: Complex Network-Based Approach for Feature Extraction and Classification
of Musical Genres
- Authors: Matheus Henrique Pimenta-Zanon, Glaucia Maria Bressan, and
Fabrício Martins Lopes
- Abstract summary: This work presents a feature extraction method for the automatic classification of musical genres.
The proposed method initially converts the music pieces into sequences of musical notes and then maps the sequences as complex networks.
Topological measurements are extracted to characterize the network topology, and these compose a feature vector used for the classification of musical genres.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Musical genre classification has been a relevant research topic. The
association between music and genres is fundamental for the media industry,
which manages musical recommendation systems, and for music streaming services,
which may present their catalogs classified by genre. In this context, this
work presents a feature extraction method for the automatic classification of
musical genres, based on complex networks and their topological measurements.
The proposed method initially converts the music pieces into sequences of
musical notes and then maps the sequences as complex networks. Topological
measurements are extracted to characterize the network topology, and these
compose a feature vector used for the classification of musical genres. The
method was evaluated in the classification of 10 musical genres by adopting the
GTZAN dataset and 8 musical genres by adopting the FMA dataset. The results
were compared with methods in the literature. The proposed method outperformed
all compared methods, presenting high accuracy and low standard deviation,
which shows its suitability for musical genre classification and contributes to
the media industry with assertive and robust automatic classification. The
proposed method is implemented as open source in the Python language and is
freely available at https://github.com/omatheuspimenta/examinner.
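The pipeline the abstract describes can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation (that is in the linked repository): the note names, the transition-network construction, and the particular topological measurements chosen here (node count, edge count, density, mean out-degree) are all assumptions for the sake of the example; the paper extracts a richer set of measurements.

```python
from collections import defaultdict

def notes_to_network(notes):
    """Map a note sequence to a weighted directed transition network:
    nodes are distinct notes, edges are consecutive-note transitions."""
    edges = defaultdict(int)
    for a, b in zip(notes, notes[1:]):
        edges[(a, b)] += 1  # edge weight = transition count
    return edges

def feature_vector(edges):
    """Characterize the network topology with a few simple measurements
    (illustrative choices only; the paper uses a larger set)."""
    nodes = {n for edge in edges for n in edge}
    n, m = len(nodes), len(edges)
    density = m / (n * (n - 1)) if n > 1 else 0.0  # directed-graph density
    out_deg = defaultdict(int)
    for a, _b in edges:
        out_deg[a] += 1
    mean_out_degree = sum(out_deg.values()) / n
    return [n, m, density, mean_out_degree]

# Hypothetical melody: a short C-major arpeggio figure
melody = ["C4", "E4", "G4", "E4", "C4", "G4", "C4"]
fv = feature_vector(notes_to_network(melody))
```

In the full method, such a vector would then feed a standard classifier trained on labeled genre examples.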
Related papers
- Attention-guided Spectrogram Sequence Modeling with CNNs for Music Genre Classification [0.0]
We present an innovative model for classifying music genres using attention-based temporal signature modeling.
Our approach captures the most temporally significant moments within each piece, crafting a unique "signature" for genre identification.
This work bridges the gap between technical classification tasks and the nuanced, human experience of genre.
arXiv Detail & Related papers (2024-11-18T21:57:03Z)
- Music Genre Classification using Large Language Models [50.750620612351284]
This paper exploits the zero-shot capabilities of pre-trained large language models (LLMs) for music genre classification.
The proposed approach splits audio signals into 20 ms chunks and processes them through convolutional feature encoders.
During inference, predictions on individual chunks are aggregated for a final genre classification.
arXiv Detail & Related papers (2024-10-10T19:17:56Z)
- Music Genre Classification: Training an AI model [0.0]
Music genre classification is an area that utilizes machine learning models and techniques for the processing of audio signals.
In this research I explore various machine learning algorithms for the purpose of music genre classification, using features extracted from audio signals.
I aim to assess the robustness of machine learning models for genre classification and to compare their results.
arXiv Detail & Related papers (2024-05-23T23:07:01Z)
- Music Genre Classification with ResNet and Bi-GRU Using Visual Spectrograms [4.354842354272412]
The limitations of manual genre classification have highlighted the need for a more advanced system.
Traditional machine learning techniques have shown potential in genre classification, but fail to capture the full complexity of music data.
This study proposes a novel approach that uses visual spectrograms as input and a hybrid model combining the strengths of the Residual Neural Network (ResNet) and the Gated Recurrent Unit (GRU).
arXiv Detail & Related papers (2023-07-20T11:10:06Z)
- MATT: A Multiple-instance Attention Mechanism for Long-tail Music Genre Classification [1.8275108630751844]
Imbalanced music genre classification is a crucial task in the Music Information Retrieval (MIR) field.
Most of the existing models are designed for class-balanced music datasets.
We propose a novel mechanism named Multi-instance Attention (MATT) to boost the performance for identifying tail classes.
arXiv Detail & Related papers (2022-09-09T03:52:44Z)
- A Study on Broadcast Networks for Music Genre Classification [0.0]
We study broadcast-based neural networks, aiming to improve localization and generalizability under a small set of parameters.
Our approach offers insights and the potential to enable compact and generalizable broadcast networks for music and audio classification.
arXiv Detail & Related papers (2022-08-25T13:36:43Z)
- Genre-conditioned Acoustic Models for Automatic Lyrics Transcription of Polyphonic Music [73.73045854068384]
We propose to transcribe the lyrics of polyphonic music using a novel genre-conditioned network.
The proposed network adopts pre-trained model parameters, and incorporates the genre adapters between layers to capture different genre peculiarities for lyrics-genre pairs.
Our experiments show that the proposed genre-conditioned network outperforms the existing lyrics transcription systems.
arXiv Detail & Related papers (2022-04-07T09:15:46Z)
- Sequence Generation using Deep Recurrent Networks and Embeddings: A study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z)
- Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN).
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z)
- Fine-Grained Visual Classification with Efficient End-to-end Localization [49.9887676289364]
We present an efficient localization module that can be fused with a classification network in an end-to-end setup.
We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft.
arXiv Detail & Related papers (2020-05-11T14:07:06Z)
- RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.