An Open-set Recognition and Few-Shot Learning Dataset for Audio Event
Classification in Domestic Environments
- URL: http://arxiv.org/abs/2002.11561v8
- Date: Mon, 11 Apr 2022 08:32:48 GMT
- Title: An Open-set Recognition and Few-Shot Learning Dataset for Audio Event
Classification in Domestic Environments
- Authors: Javier Naranjo-Alcazar, Sergi Perez-Castanos, Pedro Zuccarrello, Ana
M. Torres, Jose J. Lopez, Francesc J. Ferri and Maximo Cobos
- Abstract summary: This paper deals with the application of few-shot learning to the detection of specific and intentional acoustic events given by different types of sound alarms.
The detection of such alarms in a practical scenario can be considered an open-set recognition (OSR) problem.
This paper is aimed at providing the audio recognition community with a carefully annotated dataset comprising 1360 clips from 34 classes, divided into pattern sounds and unwanted sounds.
- Score: 3.697508383732901
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of training with a small set of positive samples is known as
few-shot learning (FSL). It is widely known that traditional deep learning (DL)
algorithms usually show very good performance when trained with large datasets.
However, in many applications, it is not possible to obtain such a high number
of samples. In the image domain, typical FSL applications include those related
to face recognition. In the audio domain, tasks such as music fraud detection or
speaker recognition can clearly benefit from FSL methods. This paper deals with the
application of FSL to the detection of specific and intentional acoustic events
given by different types of sound alarms, such as door bells or fire alarms,
using a limited number of samples. These sounds typically occur in domestic
environments where many events corresponding to a wide variety of sound classes
take place. Therefore, the detection of such alarms in a practical scenario can
be considered an open-set recognition (OSR) problem. To address the lack of a
dedicated public dataset for audio FSL, researchers usually make modifications
on other available datasets. This paper is aimed at providing the audio
recognition community with a carefully annotated dataset
(https://zenodo.org/record/3689288) for FSL in an OSR context, comprising 1360
clips from 34 classes divided into pattern sounds and unwanted sounds. To
facilitate and promote research on this area, results with state-of-the-art
baseline systems based on transfer learning are also presented.
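The transfer-learning setup described in the abstract can be illustrated with a minimal sketch: a pretrained audio model produces an embedding per clip, the few labeled support clips per alarm class are averaged into class centroids, and a query clip is either assigned to the nearest centroid or rejected as "unknown" (the open-set part). The toy 2-D vectors, function names, and the 0.8 threshold below are all illustrative assumptions, not the paper's actual baseline.

```python
import math

def class_centroids(embeddings, labels):
    # Mean embedding per class from the few labeled "support" clips (the FSL part).
    sums, counts = {}, {}
    for vec, lab in zip(embeddings, labels):
        acc = sums.setdefault(lab, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def predict_open_set(query, centroids, threshold=0.8):
    # Nearest-centroid assignment; reject as "unknown" when the best
    # similarity falls below the threshold (the OSR part).
    best_label, best_sim = "unknown", -1.0
    for lab, cen in centroids.items():
        s = cosine(query, cen)
        if s > best_sim:
            best_label, best_sim = lab, s
    return best_label if best_sim >= threshold else "unknown"

# Toy 2-D "embeddings" standing in for a pretrained model's output.
support = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
labels = ["doorbell", "doorbell", "fire_alarm", "fire_alarm"]
centroids = class_centroids(support, labels)
```

A query close to a centroid (e.g. `[1.0, 0.0]`) is labeled as that alarm, while an embedding of an unrelated domestic sound falls below the threshold and is rejected, matching the pattern-sounds vs. unwanted-sounds split of the dataset.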
Related papers
- Exploring Federated Self-Supervised Learning for General Purpose Audio
Understanding [14.468870364990291]
We propose a novel Federated SSL (F-SSL) framework, dubbed FASSL, that enables learning intermediate feature representations from large-scale decentralized heterogeneous clients.
Our study has found that audio F-SSL approaches perform on par with the centralized audio-SSL approaches on the audio-retrieval task.
arXiv Detail & Related papers (2024-02-05T10:57:48Z)
- Combating Label Noise With A General Surrogate Model For Sample
Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
- Text-to-feature diffusion for audio-visual few-shot learning [59.45164042078649]
Few-shot learning from video data is a challenging and underexplored, yet much cheaper, setup.
We introduce a unified audio-visual few-shot video classification benchmark on three datasets.
We show that AV-DIFF obtains state-of-the-art performance on our proposed benchmark for audio-visual few-shot learning.
arXiv Detail & Related papers (2023-09-07T17:30:36Z)
- SLICER: Learning universal audio representations using low-resource
self-supervised pre-training [53.06337011259031]
We present a new Self-Supervised Learning approach to pre-train encoders on unlabeled audio data.
Our primary aim is to learn audio representations that can generalize across a large variety of speech and non-speech tasks.
arXiv Detail & Related papers (2022-11-02T23:45:33Z)
- Cross-domain Voice Activity Detection with Self-Supervised
Representations [9.02236667251654]
Voice Activity Detection (VAD) aims at detecting speech segments on an audio signal.
Current state-of-the-art methods focus on training a neural network exploiting features directly contained in the acoustics.
We show that representations based on Self-Supervised Learning (SSL) can adapt well to different domains.
arXiv Detail & Related papers (2022-09-22T14:53:44Z)
- OpenLDN: Learning to Discover Novel Classes for Open-World
Semi-Supervised Learning [110.40285771431687]
Semi-supervised learning (SSL) is one of the dominant approaches to address the annotation bottleneck of supervised learning.
Recent SSL methods can effectively leverage a large repository of unlabeled data to improve performance while relying on a small set of labeled data.
This work introduces OpenLDN that utilizes a pairwise similarity loss to discover novel classes.
arXiv Detail & Related papers (2022-07-05T18:51:05Z)
- Sound and Visual Representation Learning with Multiple Pretraining Tasks [104.11800812671953]
Self-supervised learning (SSL) tasks reveal different features of the data.
This work aims to combine multiple SSL tasks (Multi-SSL) so that the learned representations generalize well across downstream tasks.
Experiments on sound representations demonstrate that Multi-SSL via incremental learning (IL) of SSL tasks outperforms single SSL task models.
arXiv Detail & Related papers (2022-01-04T09:09:38Z)
- Building a Noisy Audio Dataset to Evaluate Machine Learning Approaches
for Automatic Speech Recognition Systems [0.0]
This work presents the process of building a dataset of noisy audio clips, specifically audio degraded by interference.
We also present initial results of a classifier that uses such data for evaluation, indicating the benefits of using this dataset in the recognizer's training process.
arXiv Detail & Related papers (2021-10-04T13:08:53Z)
- PILOT: Introducing Transformers for Probabilistic Sound Event
Localization [107.78964411642401]
This paper introduces a novel transformer-based sound event localization framework, where temporal dependencies in the received multi-channel audio signals are captured via self-attention mechanisms.
The framework is evaluated on three publicly available multi-source sound event localization datasets and compared against state-of-the-art methods in terms of localization error and event detection accuracy.
arXiv Detail & Related papers (2021-06-07T18:29:19Z)
- Deep Learning Radio Frequency Signal Classification with Hybrid Images [0.0]
We focus on the different pre-processing steps that can be used on the input training data, and test the results on a fixed Deep Learning architecture.
We propose a hybrid image that takes advantage of both time and frequency domain information, and tackles the classification as a Computer Vision problem.
arXiv Detail & Related papers (2021-05-19T11:12:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.