Few-shot bioacoustic event detection at the DCASE 2022 challenge
- URL: http://arxiv.org/abs/2207.07911v1
- Date: Thu, 14 Jul 2022 09:33:47 GMT
- Title: Few-shot bioacoustic event detection at the DCASE 2022 challenge
- Authors: I. Nolasco, S. Singh, E. Vidana-Villa, E. Grout, J. Morford, M.
Emmerson, F. Jensens, H. Whitehead, I. Kiskin, A. Strandburg-Peshkin, L.
Gill, H. Pamula, V. Lostanlen, V. Morfi, D. Stowell
- Abstract summary: Few-shot sound event detection is the task of detecting sound events despite having only a few labelled examples.
This paper presents an overview of the second edition of the few-shot bioacoustic sound event detection task included in the DCASE 2022 challenge.
The highest F-score was 60% on the evaluation set, a substantial improvement over last year's edition.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot sound event detection is the task of detecting sound events, despite
having only a few labelled examples of the class of interest. This framework is
particularly useful in bioacoustics, where often there is a need to annotate
very long recordings but the expert annotator time is limited. This paper
presents an overview of the second edition of the few-shot bioacoustic sound
event detection task included in the DCASE 2022 challenge. A detailed
description of the task objectives, dataset, and baselines is presented,
together with the main results obtained and characteristics of the submitted
systems. This task received submissions from 15 different teams, of which 13
scored higher than the baselines. The highest F-score was 60% on the
evaluation set, a substantial improvement over last year's edition.
Highly-performing methods made use of prototypical networks, transductive
learning, and addressed the variable length of events from all target classes.
Furthermore, by analysing results on each of the subsets we can identify the
main difficulties that the systems face, and conclude that few-shot bioacoustic
sound event detection remains an open challenge.
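As a minimal illustration of the prototypical-network approach used by the highly-performing systems (a generic sketch with toy hypothetical embeddings, not any team's actual submission): class prototypes are computed as the mean embedding of the few labelled support examples, and query frames are assigned to the nearest prototype.

```python
import numpy as np

def prototypes(support_emb, support_lbl):
    """Mean embedding per class, computed from the few labelled support examples."""
    classes = np.unique(support_lbl)
    protos = np.stack([support_emb[support_lbl == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_emb, classes, protos):
    """Assign each query embedding to the class of its nearest prototype
    (squared Euclidean distance)."""
    dists = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-D embeddings: the event class clusters near (5, 5), background near the origin.
support = np.array([[0.1, 0.0], [0.0, 0.2], [5.0, 5.1], [4.9, 5.0]])
labels = np.array([0, 0, 1, 1])  # 0 = background, 1 = target event

cls, protos = prototypes(support, labels)
pred = classify(np.array([[0.2, 0.1], [5.2, 4.8]]), cls, protos)
```

In a real few-shot detector the embeddings would come from a learned encoder applied to audio frames, and the predicted per-frame labels would then be post-processed into event onsets and offsets.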
Related papers
- Can Large Audio-Language Models Truly Hear? Tackling Hallucinations with Multi-Task Assessment and Stepwise Audio Reasoning (2024-10-21)
  Large audio-language models (LALMs) have shown impressive capabilities in understanding and reasoning about audio and speech information. These models still face challenges, including hallucinating non-existent sound events, misidentifying the order of sound events, and incorrectly attributing sound sources.
- Double Mixture: Towards Continual Event Detection from Speech (2024-04-20)
  Speech event detection is crucial for multimedia retrieval, involving the tagging of both semantic and acoustic events. This paper tackles two primary challenges: the continual integration of new events without forgetting previous ones, and the disentanglement of semantic from acoustic events. The authors propose a novel method, 'Double Mixture,' which merges speech expertise with robust memory mechanisms to enhance adaptability and prevent forgetting.
- Multitask frame-level learning for few-shot sound event detection (2024-03-17)
  This paper focuses on few-shot Sound Event Detection (SED), which aims to automatically recognize and classify sound events with limited samples. It introduces a multitask frame-level SED framework and TimeFilterAug, a linear timing mask for data augmentation. The proposed method achieves an F-score of 63.8%, securing the 1st rank in the few-shot bioacoustic event detection category.
- Pretraining Representations for Bioacoustic Few-shot Detection using Supervised Contrastive Learning (2023-09-02)
  In bioacoustic applications, most tasks come with little labelled training data, because annotating long recordings is time-consuming and costly. The authors show that a rich feature extractor can be learned from scratch by leveraging data augmentation within a supervised contrastive learning framework, obtaining an F-score of 63.46% on the validation set and 42.7% on the test set, ranking second in the DCASE challenge.
- Few-shot bioacoustic event detection at the DCASE 2023 challenge (2023-06-15)
  This task ran as part of the DCASE challenge for the third time, with an evaluation set expanded to include new animal species. The 2023 few-shot task received submissions from 6 different teams, with F-scores reaching as high as 63% on the evaluation set. Not only have the F-score results steadily improved (40% to 60% to 63%), but the systems proposed have also become more complex.
- Robust, General, and Low Complexity Acoustic Scene Classification Systems and An Effective Visualization for Presenting a Sound Scene Context (2022-10-16)
  Presents a comprehensive analysis of Acoustic Scene Classification (ASC). The authors propose an inception-based, low-footprint ASC model, referred to as the ASC baseline, and then improve on it with a novel deep neural network architecture.
- Segment-level Metric Learning for Few-shot Bioacoustic Event Detection (2022-07-15)
  Proposes a segment-level few-shot learning framework that uses both positive and negative events during model optimization. The system achieves an F-measure of 62.73 on the DCASE 2022 challenge task 5 (DCASE2022-T5) validation set, outperforming the baseline prototypical network's 34.02 by a large margin.
- Joint-Modal Label Denoising for Weakly-Supervised Audio-Visual Video Parsing (2022-04-25)
  Focuses on the weakly-supervised audio-visual video parsing task, which aims to recognize all events belonging to each modality and localize their temporal boundaries.
- A benchmark of state-of-the-art sound event detection systems evaluated on synthetic soundscapes (2022-02-03)
  Studies the solutions proposed by participants to analyze their robustness to varying target to non-target signal-to-noise ratios and to the temporal localization of target sound events. Results show that systems tend to spuriously predict short events when non-target events are present.
- Cross-Referencing Self-Training Network for Sound Event Detection in Audio Mixtures (2021-05-27)
  Proposes a semi-supervised method for generating pseudo-labels from unsupervised data using a student-teacher scheme that balances self-training and cross-training. Results on both the "validation" and "public evaluation" sets of the DESED database show significant improvement over state-of-the-art semi-supervised learning systems.
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.