Music Instrument Classification Reprogrammed
- URL: http://arxiv.org/abs/2211.08379v1
- Date: Tue, 15 Nov 2022 18:26:01 GMT
- Title: Music Instrument Classification Reprogrammed
- Authors: Hsin-Hung Chen and Alexander Lerch
- Abstract summary: "Reprogramming" is a technique that utilizes pre-trained deep and complex neural networks originally targeting a different task by modifying and mapping both the input and output of the pre-trained model.
We demonstrate that reprogramming can effectively leverage the power of the representation learned for a different task and that the resulting reprogrammed system can perform on par with or even outperform state-of-the-art systems with a fraction of the training parameters.
- Score: 79.68916470119743
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The performance of approaches to Music Instrument Classification, a popular task in Music Information Retrieval, is often limited by the scarcity of annotated training data. We propose to address this issue with "reprogramming," a technique that utilizes pre-trained deep and complex neural networks originally targeting a different task by modifying and mapping both the input and output of the pre-trained model. We demonstrate that reprogramming can effectively leverage the power of the representation learned for a different task and that the resulting reprogrammed system can perform on par with or even outperform state-of-the-art systems with a fraction of the training parameters. Our results therefore indicate that reprogramming is a promising technique potentially applicable to other tasks impeded by data scarcity.
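To make the idea concrete, below is a minimal sketch of a reprogramming setup, assuming PyTorch, a frozen pre-trained audio classifier (here called `pretrained_model`), and illustrative class counts and input shapes; it is not the authors' exact implementation. The pre-trained network stays frozen, a small trainable perturbation modifies the input, and a learned mapping translates the source-task outputs into the target instrument labels.

```python
import torch
import torch.nn as nn

NUM_SOURCE_CLASSES = 527   # size of the source model's output (illustrative)
NUM_TARGET_CLASSES = 20    # number of instrument classes (illustrative)

class ReprogrammedClassifier(nn.Module):
    """Wrap a frozen pre-trained model with a trainable input perturbation
    and a trainable mapping from source-task outputs to target labels."""

    def __init__(self, pretrained_model, input_shape):
        super().__init__()
        self.backbone = pretrained_model
        for p in self.backbone.parameters():
            p.requires_grad = False            # keep the source model frozen
        # trainable additive pattern that "reprograms" the input
        self.delta = nn.Parameter(torch.zeros(*input_shape))
        # trainable mapping of source-task outputs to instrument labels
        self.label_map = nn.Linear(NUM_SOURCE_CLASSES, NUM_TARGET_CLASSES)

    def forward(self, x):
        x = x + self.delta                     # modify the input
        source_logits = self.backbone(x)       # frozen forward pass
        return self.label_map(source_logits)   # map the output

# Only delta and label_map are trained, so the trainable parameter count is a
# small fraction of the full network. Hypothetical usage:
# model = ReprogrammedClassifier(pretrained_model, input_shape=(1, 128, 431))
# optim = torch.optim.Adam([p for p in model.parameters() if p.requires_grad])
```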
Related papers
- An Experimental Comparison Of Multi-view Self-supervised Methods For Music Tagging [6.363158395541767]
Self-supervised learning has emerged as a powerful way to pre-train generalizable machine learning models on large amounts of unlabeled data.
In this study, we investigate and compare the performance of new self-supervised methods for music tagging.
arXiv Detail & Related papers (2024-04-14T07:56:08Z)
- Self-supervised Auxiliary Loss for Metric Learning in Music Similarity-based Retrieval and Auto-tagging [0.0]
We propose a model that builds on the self-supervised learning approach to address the similarity-based retrieval challenge.
We also found that refraining from employing augmentation during the fine-tuning phase yields better results.
arXiv Detail & Related papers (2023-04-15T02:00:28Z)
- Supervised and Unsupervised Learning of Audio Representations for Music Understanding [9.239657838690226]
We show how the domain of pre-training datasets affects the adequacy of the resulting audio embeddings for downstream tasks.
We show that models trained via supervised learning on large-scale expert-annotated music datasets achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-10-07T20:07:35Z)
- Uni-Perceiver: Pre-training Unified Architecture for Generic Perception for Zero-shot and Few-shot Tasks [73.63892022944198]
We present a generic perception architecture named Uni-Perceiver.
It processes a variety of modalities and tasks with unified modeling and shared parameters.
Results show that our pre-trained model without any tuning can achieve reasonable performance even on novel tasks.
arXiv Detail & Related papers (2021-12-02T18:59:50Z)
- Improving Music Performance Assessment with Contrastive Learning [78.8942067357231]
This study investigates contrastive learning as a potential method to improve existing Music Performance Assessment (MPA) systems.
We introduce a weighted contrastive loss suitable for regression tasks and apply it to a convolutional neural network; a generic sketch of this idea appears after this list.
Our results show that contrastive-based methods are able to match and exceed SoTA performance for MPA regression tasks.
arXiv Detail & Related papers (2021-08-03T19:24:25Z)
- Searching for Robustness: Loss Learning for Noisy Classification Tasks [81.70914107917551]
We parameterize a flexible family of loss functions using Taylor expansions and apply evolutionary strategies to search for noise-robust losses in this space.
The resulting white-box loss provides a simple and fast "plug-and-play" module that enables effective noise-robust learning in diverse downstream tasks.
arXiv Detail & Related papers (2021-02-27T15:27:22Z)
- Multi-Task Self-Supervised Pre-Training for Music Classification [36.21650132145048]
We apply self-supervised and multi-task learning methods for pre-training music encoders.
We investigate how these design choices interact with various downstream music classification tasks.
arXiv Detail & Related papers (2021-02-05T15:19:58Z)
- Self-Adaptive Training: Bridging the Supervised and Self-Supervised Learning [16.765461276790944]
Self-adaptive training is a unified training algorithm that dynamically calibrates and enhances the training process using model predictions, without incurring extra computational cost.
We analyze the training dynamics of deep networks on training data corrupted by, e.g., random noise and adversarial examples.
Our analysis shows that model predictions are able to magnify useful underlying information in data and that this phenomenon occurs broadly even in the absence of any label information.
arXiv Detail & Related papers (2021-01-21T17:17:30Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
- Multi-Stage Influence Function [97.19210942277354]
We develop a multi-stage influence function score to track predictions from a finetuned model all the way back to the pretraining data.
We study two different scenarios with the pretrained embeddings fixed or updated in the finetuning tasks.
arXiv Detail & Related papers (2020-07-17T16:03:11Z)
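For the entry above on contrastive learning for music performance assessment, the following is a generic sketch of a label-distance-weighted contrastive loss for regression, assuming PyTorch; the weighting scheme and all names are illustrative assumptions, not necessarily the formulation used in that paper. Embedding pairs with similar scores are pulled together, while pairs with distant scores are pushed apart up to a margin.

```python
import torch

def weighted_contrastive_loss(embeddings, scores, margin=1.0):
    """Label-distance-weighted contrastive loss for regression (illustrative).

    embeddings: (B, D) embeddings, e.g. from a CNN over spectrograms
    scores:     (B,) continuous assessment scores
    """
    d = torch.cdist(embeddings, embeddings)                  # pairwise embedding distances (B, B)
    label_dist = (scores[:, None] - scores[None, :]).abs()   # pairwise score differences (B, B)
    w = label_dist / (label_dist.max() + 1e-8)               # 0 = same score, 1 = most dissimilar
    attract = (1.0 - w) * d.pow(2)                           # pull similar-score pairs together
    repel = w * torch.clamp(margin - d, min=0.0).pow(2)      # push dissimilar pairs apart
    return (attract + repel).mean()

# Hypothetical usage:
# embeddings = cnn(spectrograms)                  # (B, D)
# loss = weighted_contrastive_loss(embeddings, assessment_scores)
```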
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.