Regularized Contrastive Pre-training for Few-shot Bioacoustic Sound
Detection
- URL: http://arxiv.org/abs/2309.08971v2
- Date: Wed, 17 Jan 2024 11:35:33 GMT
- Title: Regularized Contrastive Pre-training for Few-shot Bioacoustic Sound
Detection
- Authors: Ilyass Moummad, Romain Serizel, Nicolas Farrugia
- Abstract summary: We regularize supervised contrastive pre-training to learn features that transfer well to new target tasks with animal sounds unseen during training.
This work aims to lower the entry bar to few-shot bioacoustic sound event detection by proposing a simple yet effective framework for this task.
- Score: 10.395255631261458
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Bioacoustic sound event detection allows for a better understanding of animal
behavior and for better monitoring of biodiversity using audio. Deep learning
systems can help achieve this goal; however, it is difficult to acquire
sufficient annotated data to train these systems from scratch. To address this
limitation, the Detection and Classification of Acoustic Scenes and Events
(DCASE) community has recast the problem within the framework of few-shot
learning and organizes an annual challenge for learning to detect animal sounds
from only five annotated examples. In this work, we regularize supervised
contrastive pre-training to learn features that transfer well to new target
tasks with animal sounds unseen during training, achieving a high F-score of
61.52% (0.48) when no feature adaptation is applied, and an F-score of
68.19% (0.75) when we further adapt the learned features for each new target
task. This work aims to lower the entry bar to few-shot bioacoustic sound event
detection by proposing a simple yet effective framework for this task and by
providing open-source code.
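The pre-training objective described above builds on the supervised contrastive (SupCon) loss, which pulls embeddings of same-class examples together and pushes other classes apart. A minimal single-view NumPy sketch of that loss follows; the function name, temperature value, and toy data are illustrative assumptions, not the paper's implementation, and the regularization the paper adds on top is omitted.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Simplified supervised contrastive loss (Khosla et al., 2020)
    on L2-normalized embeddings; single-view, batch-wise sketch."""
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature          # pairwise similarities
    n = len(labels)
    mask = ~np.eye(n, dtype=bool)                      # exclude self-pairs
    pos = (labels[:, None] == labels[None, :]) & mask  # same-class pairs
    # log-softmax over all other samples for each anchor
    logits = sim - sim.max(axis=1, keepdims=True)
    exp = np.exp(logits) * mask
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    # average negative log-probability over each anchor's positives
    pos_counts = pos.sum(axis=1)
    valid = pos_counts > 0                             # anchors with positives
    loss = -(log_prob * pos)[valid].sum(axis=1) / pos_counts[valid]
    return loss.mean()

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))                       # toy embeddings
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])            # toy sound classes
loss = supcon_loss(feats, labels)
```

In the few-shot setting, a feature extractor trained with such a loss on annotated source classes is then frozen (or lightly adapted) for target recordings with unseen animal sounds.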
Related papers
- Multitask frame-level learning for few-shot sound event detection [46.32294691870714]
This paper focuses on few-shot Sound Event Detection (SED), which aims to automatically recognize and classify sound events with limited samples.
We introduce an innovative multitask frame-level SED framework and TimeFilterAug, a linear timing mask for data augmentation.
The proposed method achieves an F-score of 63.8%, securing the 1st rank in the few-shot bioacoustic event detection category.
arXiv Detail & Related papers (2024-03-17T05:00:40Z) - Pretraining Representations for Bioacoustic Few-shot Detection using
Supervised Contrastive Learning [10.395255631261458]
In bioacoustic applications, most tasks come with few labelled training data, because annotating long recordings is time consuming and costly.
We show that learning a rich feature extractor from scratch can be achieved by leveraging data augmentation using a supervised contrastive learning framework.
We obtain an F-score of 63.46% on the validation set and 42.7% on the test set, ranking second in the DCASE challenge.
arXiv Detail & Related papers (2023-09-02T09:38:55Z) - Segment-level Metric Learning for Few-shot Bioacoustic Event Detection [56.59107110017436]
We propose a segment-level few-shot learning framework that utilizes both the positive and negative events during model optimization.
Our system achieves an F-measure of 62.73 on the DCASE 2022 challenge task 5 (DCASE2022-T5) validation set, outperforming the baseline prototypical network (34.02) by a large margin.
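The prototypical-network baseline this entry compares against classifies each query segment by its distance to per-class mean embeddings computed from the few labelled support examples. A toy sketch of that mechanism, with hypothetical names and synthetic embeddings (not the challenge system), looks like this:

```python
import numpy as np

def prototypes(support_emb, support_labels):
    """Mean embedding per class from the few labelled support examples."""
    classes = np.unique(support_labels)
    return classes, np.stack(
        [support_emb[support_labels == c].mean(axis=0) for c in classes])

def classify(query_emb, protos):
    """Nearest-prototype assignment by squared Euclidean distance."""
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(1)
# 5-shot support set for two classes ("background" vs "event"), toy embeddings
support = np.concatenate([rng.normal(0, 1, (5, 8)), rng.normal(4, 1, (5, 8))])
labels = np.array([0] * 5 + [1] * 5)
classes, protos = prototypes(support, labels)
# well-separated query segments from the same two distributions
query = np.concatenate([rng.normal(0, 1, (3, 8)), rng.normal(4, 1, (3, 8))])
pred = classify(query, protos)
```

Segment-level metric learning extends this idea by also exploiting negative (non-event) segments during optimization rather than only the positive prototypes.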
arXiv Detail & Related papers (2022-07-15T22:41:30Z) - Few-shot bioacoustic event detection at the DCASE 2022 challenge [0.0]
Few-shot sound event detection is the task of detecting sound events despite having only a few labelled examples.
This paper presents an overview of the second edition of the few-shot bioacoustic sound event detection task included in the DCASE 2022 challenge.
The highest F-score was 60% on the evaluation set, a large improvement over the previous year's edition.
arXiv Detail & Related papers (2022-07-14T09:33:47Z) - Adaptive Few-Shot Learning Algorithm for Rare Sound Event Detection [24.385226516231004]
We propose a novel task-adaptive module that is easy to plug into any metric-based few-shot learning framework.
Our module improves the performance considerably on two datasets over baseline methods.
arXiv Detail & Related papers (2022-05-24T03:13:12Z) - Cross-Referencing Self-Training Network for Sound Event Detection in
Audio Mixtures [23.568610919253352]
This paper proposes a semi-supervised method for generating pseudo-labels from unsupervised data using a student-teacher scheme that balances self-training and cross-training.
The results of these methods on both the "validation" and "public evaluation" sets of the DESED database show significant improvement compared to state-of-the-art systems in semi-supervised learning.
arXiv Detail & Related papers (2021-05-27T18:46:59Z) - Discriminative Singular Spectrum Classifier with Applications on
Bioacoustic Signal Recognition [67.4171845020675]
We present a bioacoustic signal classifier equipped with a discriminative mechanism to extract useful features for analysis and classification efficiently.
Unlike current bioacoustic recognition methods, which are task-oriented, the proposed model relies on transforming the input signals into vector subspaces.
The validity of the proposed method is verified using three challenging bioacoustic datasets containing anuran, bee, and mosquito species.
arXiv Detail & Related papers (2021-03-18T11:01:21Z) - Searching for Robustness: Loss Learning for Noisy Classification Tasks [81.70914107917551]
We parameterize a flexible family of loss functions using Taylor expansions and apply evolutionary strategies to search for noise-robust losses in this space.
The resulting white-box loss provides a simple and fast "plug-and-play" module that enables effective noise-robust learning in diverse downstream tasks.
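To make the idea of a searchable loss family concrete, here is a toy sketch in which a loss is a truncated polynomial in the residual, with the coefficients as the parameters an evolutionary search would tune. The function name and parameterization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def taylor_loss(pred, target, coeffs):
    """A loss drawn from a truncated Taylor-style polynomial family in the
    residual r = pred - target; coeffs are the searchable parameters."""
    r = pred - target
    powers = np.stack([r ** k for k in range(1, len(coeffs) + 1)])
    return float(np.mean(coeffs @ powers.reshape(len(coeffs), -1)))

pred = np.array([0.2, 0.8, 0.5])
target = np.array([0.0, 1.0, 1.0])
# coeffs = [0, 1] keeps only the quadratic term, recovering mean squared error
mse_like = taylor_loss(pred, target, np.array([0.0, 1.0]))
```

An evolutionary strategy would then evaluate candidate coefficient vectors on noisily labelled data and keep those yielding the most robust training.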
arXiv Detail & Related papers (2021-02-27T15:27:22Z) - Speech Enhancement for Wake-Up-Word detection in Voice Assistants [60.103753056973815]
Keyword spotting, and in particular Wake-Up-Word (WUW) detection, is a very important task for voice assistants.
This paper proposes a Speech Enhancement model adapted to the task of WUW detection.
It aims at increasing the recognition rate and reducing false alarms in noisy conditions.
arXiv Detail & Related papers (2021-01-29T18:44:05Z) - Unsupervised Domain Adaptation for Acoustic Scene Classification Using
Band-Wise Statistics Matching [69.24460241328521]
Machine learning algorithms can be negatively affected by mismatches between training (source) and test (target) data distributions.
We propose an unsupervised domain adaptation method that consists of aligning the first- and second-order sample statistics of each frequency band of target-domain acoustic scenes to the ones of the source-domain training dataset.
We show that the proposed method outperforms the state-of-the-art unsupervised methods found in the literature in terms of both source- and target-domain classification accuracy.
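The band-wise moment matching this entry describes amounts to standardizing each frequency band of a target-domain spectrogram and rescaling it to the source domain's per-band mean and standard deviation. A minimal sketch, with hypothetical names and synthetic spectrograms standing in for real acoustic-scene features:

```python
import numpy as np

def match_band_stats(target_spec, source_mean, source_std, eps=1e-8):
    """Align the per-band (first- and second-order) statistics of a
    target spectrogram (freq x time) to source-domain statistics."""
    t_mean = target_spec.mean(axis=1, keepdims=True)
    t_std = target_spec.std(axis=1, keepdims=True)
    normed = (target_spec - t_mean) / (t_std + eps)  # zero mean, unit std
    return normed * source_std[:, None] + source_mean[:, None]

rng = np.random.default_rng(2)
source = rng.normal(0.0, 1.0, (40, 200))   # 40 mel bands, 200 frames
target = rng.normal(3.0, 2.0, (40, 100))   # shifted/rescaled target domain
aligned = match_band_stats(target, source.mean(axis=1), source.std(axis=1))
```

After alignment, each band of the target spectrogram matches the source domain's statistics, so a classifier trained on source data sees inputs from a familiar distribution.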
arXiv Detail & Related papers (2020-04-30T23:56:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.