Why do Angular Margin Losses work well for Semi-Supervised Anomalous
Sound Detection?
- URL: http://arxiv.org/abs/2309.15643v2
- Date: Fri, 24 Nov 2023 06:20:25 GMT
- Authors: Kevin Wilkinghoff and Frank Kurth
- Abstract summary: State-of-the-art anomalous sound detection systems often utilize angular margin losses to learn suitable representations of acoustic data.
The goal of this work is to investigate why using angular margin losses with auxiliary tasks works well for detecting anomalous sounds.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-of-the-art anomalous sound detection systems often utilize angular
margin losses to learn suitable representations of acoustic data using an
auxiliary task, which usually is a supervised or self-supervised classification
task. The underlying idea is that, in order to solve this auxiliary task,
specific information about normal data needs to be captured in the learned
representations and that this information is also sufficient to differentiate
between normal and anomalous samples. Especially in noisy conditions,
discriminative models based on angular margin losses tend to significantly
outperform systems based on generative or one-class models. The goal of this
work is to investigate why using angular margin losses with auxiliary tasks
works well for detecting anomalous sounds. To this end, it is shown, both
theoretically and experimentally, that minimizing angular margin losses also
minimizes compactness loss while inherently preventing learning trivial
solutions. Furthermore, multiple experiments are conducted to show that using a
related classification task as an auxiliary task teaches the model to learn
representations suitable for detecting anomalous sounds in noisy conditions.
Among these experiments are performance evaluations, visualizations of the
embedding space with t-SNE, and visualizations of the input representations
with respect to the anomaly score using randomized input sampling for
explanation.
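The abstract's core argument is that an angular margin loss, by L2-normalizing both embeddings and class centers, simultaneously enforces intra-class compactness and rules out the trivial collapse of all embeddings to a single point. The following is a minimal sketch of an ArcFace-style angular margin loss illustrating that argument; the dimensions, margin, and scale values are hypothetical and this is not the authors' exact implementation.

```python
import numpy as np

def angular_margin_loss(embedding, centers, label, margin=0.5, scale=30.0):
    """ArcFace-style angular margin loss for a single sample.

    Embedding and class centers are L2-normalized, so each logit is the
    cosine of the angle between the embedding and a center. Adding the
    margin to the true class's angle before the softmax forces the
    embedding into a tighter cone around its center (compactness), and
    the unit-norm constraint prevents the trivial all-zero solution.
    """
    e = embedding / np.linalg.norm(embedding)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    cos = c @ e                              # cosine similarity to each center
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    theta[label] += margin                   # penalize the true class's angle
    logits = scale * np.cos(theta)
    logits -= logits.max()                   # numerically stable softmax
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]                 # cross-entropy on adjusted logits

# An embedding aligned with its class center incurs a lower loss than one
# halfway between centers, illustrating the compactness argument.
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
aligned = np.array([0.9, 0.1])
off_axis = np.array([0.5, 0.5])
assert angular_margin_loss(aligned, centers, 0) < angular_margin_loss(off_axis, centers, 0)
```

At test time, a natural anomaly score under this setup is the cosine distance between a sample's embedding and its nearest class center, though the exact scoring function is a design choice not fixed by the loss itself.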
Related papers
- Improving a Named Entity Recognizer Trained on Noisy Data with a Few
Clean Instances [55.37242480995541]
We propose to denoise noisy NER data with guidance from a small set of clean instances.
Along with the main NER model we train a discriminator model and use its outputs to recalibrate the sample weights.
Results on public crowdsourcing and distant supervision datasets show that the proposed method can consistently improve performance with a small guidance set.
arXiv Detail & Related papers (2023-10-25T17:23:37Z) - Noisy-ArcMix: Additive Noisy Angular Margin Loss Combined With Mixup for
Anomalous Sound Detection [5.1308092683559225]
Unsupervised anomalous sound detection (ASD) aims to identify anomalous sounds by learning the features of normal operational sounds and sensing their deviations.
Recent approaches have focused on the self-supervised task utilizing the classification of normal data, and advanced models have shown that securing representation space for anomalous data is important.
We propose a training technique aimed at ensuring intra-class compactness and increasing the angle gap between normal and abnormal samples.
arXiv Detail & Related papers (2023-10-10T07:04:36Z) - Improving the Robustness of Summarization Models by Detecting and
Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a lightweight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
arXiv Detail & Related papers (2022-12-20T00:33:11Z) - Framing Algorithmic Recourse for Anomaly Detection [18.347886926848563]
We present an approach -- Context preserving Algorithmic Recourse for Anomalies in Tabular data (CARAT).
CARAT uses a transformer-based encoder-decoder model to explain an anomaly by finding features with low likelihood.
Semantically coherent counterfactuals are generated by modifying the highlighted features, using the overall context of features in the anomalous instance(s).
arXiv Detail & Related papers (2022-06-29T03:30:51Z) - SLA$^2$P: Self-supervised Anomaly Detection with Adversarial
Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed as SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - Canonical Polyadic Decomposition and Deep Learning for Machine Fault
Detection [0.0]
It is impossible to collect enough data to learn all types of faults from a machine.
New algorithms, trained using data from healthy conditions only, were developed to perform unsupervised anomaly detection.
A key issue in the development of these algorithms is the noise in the signals, as it impacts the anomaly detection performance.
arXiv Detail & Related papers (2021-07-20T14:06:50Z) - Anomalous Sound Detection Using a Binary Classification Model and Class
Centroids [47.856367556856554]
We propose a binary classification model that is developed by using not only normal data but also outlier data from other domains as pseudo-anomalous sound data.
We also investigate the effectiveness of additionally using anomalous sound data for further improving the binary classification model.
arXiv Detail & Related papers (2021-06-11T03:35:06Z) - Self-Attentive Classification-Based Anomaly Detection in Unstructured
Logs [59.04636530383049]
We propose Logsy, a classification-based method to learn log representations.
We show an average improvement of 0.25 in the F1 score, compared to the previous methods.
arXiv Detail & Related papers (2020-08-21T07:26:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.