Concept-Based Techniques for "Musicologist-friendly" Explanations in a Deep Music Classifier
- URL: http://arxiv.org/abs/2208.12485v2
- Date: Mon, 29 Aug 2022 09:43:32 GMT
- Title: Concept-Based Techniques for "Musicologist-friendly" Explanations in a Deep Music Classifier
- Authors: Francesco Foscarin, Katharina Hoedt, Verena Praher, Arthur Flexer,
Gerhard Widmer
- Abstract summary: We focus on more human-friendly explanations based on high-level musical concepts.
Our research targets trained systems (post-hoc explanations) and explores two approaches.
We demonstrate both techniques on an existing symbolic composer classification system, showcase their potential, and highlight their intrinsic limitations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current approaches for explaining deep learning systems applied to musical
data provide results in a low-level feature space, e.g., by highlighting
potentially relevant time-frequency bins in a spectrogram or time-pitch bins in
a piano roll. This can be difficult to understand, particularly for
musicologists without technical knowledge. To address this issue, we focus on
more human-friendly explanations based on high-level musical concepts. Our
research targets trained systems (post-hoc explanations) and explores two
approaches: a supervised one, where the user can define a musical concept and
test if it is relevant to the system; and an unsupervised one, where musical
excerpts containing relevant concepts are automatically selected and given to
the user for interpretation. We demonstrate both techniques on an existing
symbolic composer classification system, showcase their potential, and
highlight their intrinsic limitations.
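The supervised approach the abstract describes, in which a user defines a musical concept and tests whether it is relevant to the trained system, resembles testing with concept activation vectors (CAVs). The minimal sketch below is an illustrative assumption, not the authors' implementation: the hidden-layer activations, gradients, layer dimension, and concept name are all simulated stand-ins for what a real classifier would provide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden-layer activations (dimension 16) for two groups of
# musical excerpts: those a user labeled as containing a concept (e.g. an
# "Alberti bass" pattern) and random excerpts. In a real system these would
# be extracted from the trained classifier; here they are simulated.
concept_acts = rng.normal(loc=1.0, size=(50, 16))
random_acts = rng.normal(loc=0.0, size=(50, 16))

# A simple concept activation vector (CAV): the normalized difference of
# group means, which points from "random" toward "concept" in activation
# space. (Trained linear probes are the more common choice.)
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# Gradients of one composer logit with respect to the same hidden layer,
# one row per test excerpt (again simulated).
grads = rng.normal(size=(100, 16))

# Concept sensitivity score: the fraction of excerpts whose logit would
# increase if their activations moved in the concept direction. A score
# far from 0.5 suggests the concept is relevant to that composer class.
tcav_score = float(np.mean(grads @ cav > 0))
print(round(tcav_score, 2))
```

Swapping the simulated arrays for real activations and gradients from the composer classifier would turn this into the relevance test the abstract mentions.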
Related papers
- Attention-guided Spectrogram Sequence Modeling with CNNs for Music Genre Classification
We present an innovative model for classifying music genres using attention-based temporal signature modeling.
Our approach captures the most temporally significant moments within each piece, crafting a unique "signature" for genre identification.
This work bridges the gap between technical classification tasks and the nuanced, human experience of genre.
arXiv Detail & Related papers (2024-11-18T21:57:03Z)
- A Survey of Foundation Models for Music Understanding
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z)
- Towards Explainable and Interpretable Musical Difficulty Estimation: A Parameter-efficient Approach
Estimating music piece difficulty is important for organizing educational music collections.
Our work employs explainable descriptors for difficulty estimation in symbolic music representations.
Our approach, evaluated on piano repertoire categorized into 9 classes, achieved 41.4% accuracy independently, with a mean squared error (MSE) of 1.7.
arXiv Detail & Related papers (2024-08-01T11:23:42Z)
- Music Genre Classification with ResNet and Bi-GRU Using Visual Spectrograms
The limitations of manual genre classification have highlighted the need for a more advanced system.
Traditional machine learning techniques have shown potential in genre classification, but fail to capture the full complexity of music data.
This study proposes a novel approach using visual spectrograms as input and a hybrid model that combines the strengths of the Residual Neural Network (ResNet) and the Gated Recurrent Unit (GRU).
arXiv Detail & Related papers (2023-07-20T11:10:06Z)
- Pitchclass2vec: Symbolic Music Structure Segmentation with Chord Embeddings
We present a novel music segmentation method, pitchclass2vec, based on symbolic chord annotations.
Our algorithm is based on a long short-term memory (LSTM) neural network and outperforms state-of-the-art techniques based on symbolic chord annotations in the field.
arXiv Detail & Related papers (2023-03-24T10:23:15Z)
- Learning Unsupervised Hierarchies of Audio Concepts
In computer vision, concept learning was proposed to adjust explanations to the right abstraction level.
In this paper, we adapt concept learning to the realm of music, with its particularities.
We propose a method to learn numerous music concepts from audio and then automatically hierarchise them to expose their mutual relationships.
arXiv Detail & Related papers (2022-07-21T16:34:31Z)
- MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training
Symbolic music understanding refers to the understanding of music from symbolic data.
MusicBERT is a large-scale pre-trained model for music understanding.
arXiv Detail & Related papers (2021-06-10T10:13:05Z)
- Bach or Mock? A Grading Function for Chorales in the Style of J.S. Bach
We introduce a grading function that evaluates four-part chorales in the style of J.S. Bach along important musical features.
We show that the function is both interpretable and outperforms human experts at discriminating Bach chorales from model-generated ones.
arXiv Detail & Related papers (2020-06-23T21:02:55Z)
- Music Gesture for Visual Sound Separation
"Music Gesture" is a keypoint-based structured representation to explicitly model the body and finger movements of musicians when they perform music.
We first adopt a context-aware graph network to integrate visual semantic context with body dynamics, and then apply an audio-visual fusion model to associate body movements with the corresponding audio signals.
arXiv Detail & Related papers (2020-04-20T17:53:46Z)
- Multi-Modal Music Information Retrieval: Augmenting Audio-Analysis with Visual Computing for Improved Music Video Analysis
This thesis combines audio-analysis with computer vision to approach Music Information Retrieval (MIR) tasks from a multi-modal perspective.
The main hypothesis of this work is based on the observation that certain expressive categories such as genre or theme can be recognized on the basis of the visual content alone.
The experiments are conducted for three MIR tasks Artist Identification, Music Genre Classification and Cross-Genre Classification.
arXiv Detail & Related papers (2020-02-01T17:57:14Z)