Self-Supervised Learning for Anomalous Sound Detection
- URL: http://arxiv.org/abs/2312.09578v1
- Date: Fri, 15 Dec 2023 07:16:12 GMT
- Title: Self-Supervised Learning for Anomalous Sound Detection
- Authors: Kevin Wilkinghoff
- Abstract summary: State-of-the-art anomalous sound detection (ASD) systems are often trained by using an auxiliary classification task to learn an embedding space.
A new state-of-the-art performance for the DCASE2023 ASD dataset is obtained that outperforms all other published results on this dataset by a large margin.
- Score: 0.43512163406551996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-of-the-art anomalous sound detection (ASD) systems are often trained with an auxiliary classification task to learn an embedding space. Doing so yields embeddings that are robust to noise and ignore non-target sound events, but it requires manually annotated meta information to serve as class labels. However, the less difficult the classification task becomes, the less informative the embeddings are and the worse the resulting ASD performance is. A solution to this problem is to utilize self-supervised learning (SSL). In this work, feature exchange (FeatEx), a simple yet effective SSL approach for ASD, is proposed. In addition, FeatEx is compared to and combined with existing SSL approaches. As the main result, a new state-of-the-art performance on the DCASE2023 ASD dataset is obtained that outperforms all other published results on this dataset by a large margin.
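The abstract gives no implementation details, but the core feature-exchange idea can be sketched in a few lines. The snippet below assumes an embedding model with two sub-network branches per clip, as is common in embedding-based ASD systems; the function name `featex_batch`, the tensor shapes, and the shifted pseudo-label scheme are illustrative assumptions rather than the paper's exact construction.

```python
# Minimal, illustrative sketch of a feature-exchange (FeatEx) style SSL
# objective. Shapes, names, and the pseudo-label scheme are assumptions
# for illustration; the paper's exact construction may differ.
import torch

def featex_batch(emb_a, emb_b, labels, num_classes):
    """Build a FeatEx-style training batch from two embedding branches.

    emb_a, emb_b: (batch, dim) embeddings of the same clips from two
        sub-networks (e.g. one per input feature type).
    labels: (batch,) auxiliary-task class ids in [0, num_classes).
    Returns concatenated embeddings plus extended targets: non-exchanged
    pairs keep their class id, exchanged pairs get a shifted pseudo-class.
    """
    # Non-exchanged combination: both branches come from the same clip.
    orig = torch.cat([emb_a, emb_b], dim=-1)

    # Exchanged combination: pair each clip's first-branch embedding with
    # the second-branch embedding of a randomly chosen other clip.
    perm = torch.randperm(emb_b.size(0))
    swapped = torch.cat([emb_a, emb_b[perm]], dim=-1)

    embeddings = torch.cat([orig, swapped], dim=0)  # (2*batch, 2*dim)
    # Simplified pseudo-labels: shift the id space for exchanged pairs so
    # the classifier must also recognize that an exchange happened.
    targets = torch.cat([labels, labels + num_classes], dim=0)
    return embeddings, targets
```

A classification head with `2 * num_classes` outputs trained with cross-entropy on these targets would then solve the original auxiliary task and the exchange-detection task jointly; at test time, anomaly scores in this family of systems are typically computed as distances to the normal training data in the learned embedding space.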
Related papers
- DIDA: Denoised Imitation Learning based on Domain Adaptation [28.36684781402964]
We focus on the problem of Learning from Noisy Demonstrations (LND), where the imitator is required to learn from data with noise.
We propose Denoised Imitation learning based on Domain Adaptation (DIDA), which designs two discriminators to distinguish the noise level and expertise level of data.
Experiment results on MuJoCo demonstrate that DIDA can successfully handle challenging imitation tasks from demonstrations with various types of noise, outperforming most baseline methods.
arXiv Detail & Related papers (2024-04-04T11:29:05Z)
- Improving a Named Entity Recognizer Trained on Noisy Data with a Few Clean Instances [55.37242480995541]
We propose to denoise noisy NER data with guidance from a small set of clean instances.
Along with the main NER model we train a discriminator model and use its outputs to recalibrate the sample weights.
Results on public crowdsourcing and distant supervision datasets show that the proposed method can consistently improve performance with a small guidance set.
arXiv Detail & Related papers (2023-10-25T17:23:37Z)
- Improving Open-Set Semi-Supervised Learning with Self-Supervision [13.944469874692459]
Open-set semi-supervised learning (OSSL) embodies a practical scenario within semi-supervised learning.
We propose an OSSL framework that facilitates learning from all unlabeled data through self-supervision.
Our method yields state-of-the-art results on many of the evaluated benchmark problems.
arXiv Detail & Related papers (2023-01-24T16:46:37Z)
- Representation Learning for the Automatic Indexing of Sound Effects Libraries [79.68916470119743]
We show that a task-specific but dataset-independent representation can successfully address data issues such as class imbalance, inconsistent class labels, and insufficient dataset size.
Detailed experimental results show the impact of metric learning approaches and different cross-dataset training methods on representational effectiveness.
arXiv Detail & Related papers (2022-08-18T23:46:13Z)
- OpenLDN: Learning to Discover Novel Classes for Open-World Semi-Supervised Learning [110.40285771431687]
Semi-supervised learning (SSL) is one of the dominant approaches to address the annotation bottleneck of supervised learning.
Recent SSL methods can effectively leverage a large repository of unlabeled data to improve performance while relying on a small set of labeled data.
This work introduces OpenLDN, which utilizes a pairwise similarity loss to discover novel classes.
arXiv Detail & Related papers (2022-07-05T18:51:05Z)
- Augmented Contrastive Self-Supervised Learning for Audio Invariant Representations [28.511060004984895]
We propose an augmented contrastive SSL framework to learn invariant representations from unlabeled data.
Our method applies various perturbations to the unlabeled input data and uses contrastive learning to learn representations that are robust to such perturbations (a generic sketch of this recipe appears after this list).
arXiv Detail & Related papers (2021-12-21T02:50:53Z)
- Can semi-supervised learning reduce the amount of manual labelling required for effective radio galaxy morphology classification? [0.0]
We test whether SSL can achieve performance comparable to the current supervised state of the art when using many fewer labelled data points.
We find that although SSL provides additional regularisation, its performance degrades rapidly when using very few labels.
arXiv Detail & Related papers (2021-11-08T09:36:48Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers [71.08167292329028]
We propose a novel Open-set Semi-Supervised Learning (OSSL) approach called OpenMatch.
OpenMatch unifies FixMatch with novelty detection based on one-vs-all (OVA) classifiers.
It achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.
arXiv Detail & Related papers (2021-05-28T23:57:15Z)
- Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)
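As flagged above, the augmented contrastive SSL entry follows the standard recipe of perturbing each unlabeled clip twice and pulling the two views together in embedding space. A generic SimCLR-style sketch is given below; the loss name, temperature value, and shapes are common defaults, not details taken from that paper.

```python
# Generic sketch of an augmented contrastive SSL step (SimCLR-style
# NT-Xent loss); hyperparameters and shapes are illustrative defaults.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Contrastive loss over two perturbed views of the same clips.

    z1, z2: (batch, dim) embeddings of two augmentations of one batch.
    Positive pairs are (z1[i], z2[i]); all other pairs are negatives.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)  # (2B, dim)
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # mask self-pairs

    batch = z1.size(0)
    # Index of each sample's positive view in the concatenated batch.
    targets = torch.cat([torch.arange(batch) + batch,
                         torch.arange(batch)]).to(sim.device)
    return F.cross_entropy(sim, targets)
```

In use, one would compute `z1 = model(augment(x))` and `z2 = model(augment(x))` with two independent perturbations of the same batch and minimize `nt_xent_loss(z1, z2)`.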