Mean-Shifted Contrastive Loss for Anomaly Detection
- URL: http://arxiv.org/abs/2106.03844v1
- Date: Mon, 7 Jun 2021 17:58:03 GMT
- Title: Mean-Shifted Contrastive Loss for Anomaly Detection
- Authors: Tal Reiss, Yedid Hoshen
- Abstract summary: We propose a new loss function which can overcome failure modes of both center-loss and contrastive-loss methods.
Our improvements yield a new anomaly detection approach, based on $\textit{Mean-Shifted Contrastive Loss}$.
Our method achieves state-of-the-art anomaly detection performance on multiple benchmarks, including $97.5\%$ ROC-AUC on the CIFAR-10 dataset.
- Score: 34.97652735163338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep anomaly detection methods learn representations that separate
normal from anomalous samples. Very effective representations are obtained when
powerful externally trained feature extractors (e.g. ResNets pre-trained on
ImageNet) are fine-tuned on the training data, which consists of normal samples
and no anomalies. However, this is a difficult task that can suffer from
catastrophic collapse, i.e. it is prone to learning trivial and non-specific
features. In this paper, we propose a new loss function which can overcome
failure modes of both center-loss and contrastive-loss methods. Furthermore, we
combine it with a confidence-invariant angular center loss, which replaces the
Euclidean distance used in previous work, a measure that was sensitive to
prediction confidence. Our improvements yield a new anomaly detection approach, based on
$\textit{Mean-Shifted Contrastive Loss}$, which is both more accurate and less
sensitive to catastrophic collapse than previous methods. Our method achieves
state-of-the-art anomaly detection performance on multiple benchmarks including
$97.5\%$ ROC-AUC on the CIFAR-10 dataset.
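The abstract describes two components: a contrastive loss computed on features shifted by the center of the normal training data, and an angular center loss that depends only on feature direction, not norm. Below is a minimal PyTorch sketch of how these two terms could fit together; the NT-Xent-style formulation, temperature, and loss weighting are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of a Mean-Shifted Contrastive (MSC) objective plus an angular
# center loss, per the abstract above. `tau`, `lam`, and the shapes are assumed.
import torch
import torch.nn.functional as F

def mean_shift(feats: torch.Tensor, center: torch.Tensor) -> torch.Tensor:
    """Project L2-normalized features into the mean-shifted space."""
    feats = F.normalize(feats, dim=1)      # onto the unit sphere
    shifted = feats - center               # shift by the normal-data center
    return F.normalize(shifted, dim=1)     # back onto the unit sphere

def msc_loss(f1, f2, center, tau=0.25):
    """Contrastive (NT-Xent style) loss on mean-shifted features.
    f1, f2: two augmented views of the same batch, shape (B, D)."""
    z = torch.cat([mean_shift(f1, center), mean_shift(f2, center)], dim=0)
    sim = z @ z.t() / tau                  # cosine similarities (z is unit-norm)
    sim.fill_diagonal_(float('-inf'))      # exclude self-similarity
    n = z.size(0)
    # positives: view i pairs with view i + B (and vice versa)
    targets = torch.arange(n, device=z.device).roll(n // 2)
    return F.cross_entropy(sim, targets)

def angular_center_loss(feats, center):
    """Confidence-invariant angular center loss: pull each feature's *direction*
    toward the center, ignoring its norm (the prediction confidence)."""
    return -(F.normalize(feats, dim=1) @ center).mean()

# Usage: `center` is the normalized mean of the normal training features;
# feats1/feats2 come from the fine-tuned backbone on two augmentations.
B, D = 32, 512
center = F.normalize(torch.randn(D), dim=0)
feats1, feats2 = torch.randn(B, D), torch.randn(B, D)
lam = 1.0  # assumed weighting between the two terms
loss = msc_loss(feats1, feats2, center) + lam * (
    angular_center_loss(feats1, center) + angular_center_loss(feats2, center)) / 2
```

Normalizing both before and after the shift keeps every comparison on the unit sphere, which is what makes both terms insensitive to feature magnitude, i.e. to prediction confidence.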
Related papers
- CL-Flow: Strengthening the Normalizing Flows by Contrastive Learning for Better Anomaly Detection [1.951082473090397]
We propose a self-supervised anomaly detection approach that combines contrastive learning with 2D-Flow.
Compared to mainstream unsupervised approaches, our self-supervised method demonstrates superior detection accuracy, fewer additional model parameters, and faster inference speed.
Our approach showcases new state-of-the-art results, achieving a performance of 99.6% in image-level AUROC on the MVTecAD dataset and 96.8% in image-level AUROC on the BTAD dataset.
arXiv Detail & Related papers (2023-11-12T10:07:03Z)
- Don't Miss Out on Novelty: Importance of Novel Features for Deep Anomaly Detection [64.21963650519312]
Anomaly Detection (AD) is a critical task that involves identifying observations that do not conform to a learned model of normality.
We propose a novel approach to AD that uses explainability to capture novel features as unexplained observations in the input space.
Our approach establishes a new state-of-the-art across multiple benchmarks, handling diverse anomaly types.
arXiv Detail & Related papers (2023-10-01T21:24:05Z)
- Lossy Compression for Robust Unsupervised Time-Series Anomaly Detection [4.873362301533825]
We propose a Lossy Causal Temporal Convolutional Neural Network Autoencoder for anomaly detection.
Our framework uses a rate-distortion loss and an entropy bottleneck to learn a compressed latent representation for the task; a minimal sketch of such an objective follows this entry.
arXiv Detail & Related papers (2022-12-05T14:29:16Z)
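The entry above hinges on a rate-distortion objective: reconstruct the window well (distortion) while keeping the latent code cheap to encode (rate). Below is a heavily simplified PyTorch sketch of such an objective with a causal convolutional autoencoder; the architecture, the uniform-noise quantization surrogate, the Gaussian rate proxy, and `beta` are all illustrative assumptions, not the paper's design.

```python
# Hedged sketch of a rate-distortion loss for a causal time-series autoencoder.
import torch
import torch.nn.functional as F

class TinySeqAE(torch.nn.Module):
    def __init__(self, d_in=1, d_lat=8):
        super().__init__()
        # Causal temporal convolutions: pad only on the left so each output
        # step depends on past inputs only.
        self.enc = torch.nn.Conv1d(d_in, d_lat, kernel_size=3)
        self.dec = torch.nn.Conv1d(d_lat, d_in, kernel_size=3)

    def forward(self, x):                  # x: (B, d_in, T)
        z = self.enc(F.pad(x, (2, 0)))     # latent code, (B, d_lat, T)
        # Entropy-bottleneck surrogate: additive uniform noise models quantization.
        z_noisy = z + torch.rand_like(z) - 0.5
        x_hat = self.dec(F.pad(z_noisy, (2, 0)))
        return x_hat, z_noisy

def rate_distortion_loss(x, x_hat, z, beta=0.1):
    distortion = F.mse_loss(x_hat, x)      # reconstruction error
    # Rate term: code negative log-likelihood under a unit Gaussian prior,
    # a simple stand-in for a learned entropy model.
    rate = 0.5 * (z ** 2).mean()
    return distortion + beta * rate

# At test time, the per-window distortion (and/or rate) can serve as the score.
model = TinySeqAE()
x = torch.randn(16, 1, 128)
x_hat, z = model(x)
loss = rate_distortion_loss(x, x_hat, z)
```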
- Self-Supervised Losses for One-Class Textual Anomaly Detection [6.649715954440713]
Current deep learning methods for anomaly detection in text rely on supervisory signals in the inlier data that are difficult to tune.
We study a simpler alternative: fine-tuning Transformers on the inlier data with self-supervised objectives and using the losses as an anomaly score.
Overall, the self-supervision approach outperforms other methods under various anomaly detection scenarios; a sketch of the loss-as-score idea follows this entry.
arXiv Detail & Related papers (2022-04-12T10:42:47Z)
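The entry above suggests a simple recipe: fine-tune a masked-language model on the inlier corpus, then treat each document's self-supervised loss as its anomaly score. A minimal sketch of the scoring side follows, using the Hugging Face transformers API; the model choice and masking scheme are assumptions, not the paper's configuration.

```python
# Hedged sketch: masked-language-model loss as a textual anomaly score.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased").eval()

@torch.no_grad()
def mlm_anomaly_score(text: str, mask_prob: float = 0.15) -> float:
    enc = tok(text, return_tensors="pt", truncation=True)
    input_ids = enc["input_ids"].clone()
    labels = input_ids.clone()
    # Randomly mask a fraction of tokens; only masked positions count toward loss.
    mask = torch.rand(input_ids.shape) < mask_prob
    mask &= (input_ids != tok.cls_token_id) & (input_ids != tok.sep_token_id)
    if not mask.any():
        mask[0, 1] = True                 # ensure at least one masked token
    labels[~mask] = -100                  # ignored by the loss
    input_ids[mask] = tok.mask_token_id
    out = model(input_ids=input_ids,
                attention_mask=enc["attention_mask"], labels=labels)
    return out.loss.item()                # mean cross-entropy on masked tokens

# Higher loss = more anomalous. In practice the model would first be
# fine-tuned on the inlier corpus (omitted here).
print(mlm_anomaly_score("The quick brown fox jumps over the lazy dog."))
```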
- Scale-Equivalent Distillation for Semi-Supervised Object Detection [57.59525453301374]
Recent Semi-Supervised Object Detection (SS-OD) methods are mainly based on self-training, generating hard pseudo-labels with a teacher model on unlabeled data as supervisory signals.
We analyze the challenges these methods face through empirical experiments.
We introduce a novel approach, Scale-Equivalent Distillation (SED), which is a simple yet effective end-to-end knowledge distillation framework robust to large object size variance and class imbalance.
arXiv Detail & Related papers (2022-03-23T07:33:37Z)
- Simple Adaptive Projection with Pretrained Features for Anomaly Detection [0.0]
We propose a novel adaptation framework combining a simple linear transformation with self-attention.
Our simple adaptive projection with pretrained features (SAP2) yields a novel anomaly detection criterion.
arXiv Detail & Related papers (2021-12-05T15:29:59Z)
- SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z)
- Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search [12.263913626161155]
A common type of evaluation is to approximate the adversarial risk of a model as a robustness indicator.
We propose AutoLoss-AR, the first method that searches for loss functions to tighten this approximation error.
The results demonstrate the effectiveness of the proposed methods.
arXiv Detail & Related papers (2021-11-09T11:47:43Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings; a sketch of the deviation-loss idea follows this entry.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
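The entry above learns discriminative normality from a prior probability plus a few labeled anomalies. One common way to realize this, sketched below under assumed conventions (a standard Gaussian prior and a margin of 5), is a deviation loss that anchors normal scores at the prior's mean and pushes labeled anomalies several standard deviations above it.

```python
# Hedged sketch of a deviation loss over scalar anomaly scores.
import torch

def deviation_loss(scores, y, margin=5.0, n_ref=5000):
    """scores: (B,) scalar anomaly scores; y: (B,) 0 = normal, 1 = anomaly."""
    # Reference scores drawn from the Gaussian prior N(0, 1).
    ref = torch.randn(n_ref, device=scores.device)
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)   # z-score-style deviation
    loss_normal = dev.abs()                            # pull normals to the mean
    loss_anom = torch.clamp(margin - dev, min=0.0)     # push anomalies >= margin
    return torch.where(y.bool(), loss_anom, loss_normal).mean()

# Usage with a scoring network's outputs:
scores = torch.randn(32)
labels = torch.zeros(32); labels[:3] = 1               # a few labeled anomalies
print(deviation_loss(scores, labels))
```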
- Continual Learning for Fake Audio Detection [62.54860236190694]
This paper proposes Detecting Fake Without Forgetting, a continual-learning-based method that lets the model learn new spoofing attacks incrementally.
Experiments are conducted on the ASVspoof 2019 dataset.
arXiv Detail & Related papers (2021-04-15T07:57:05Z)
- TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks [73.01104041298031]
TadGAN is an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs).
To capture the temporal correlations of time series, we use LSTM Recurrent Neural Networks as base models for Generators and Critics.
To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one; a sketch of one such scoring combination follows this entry.
arXiv Detail & Related papers (2020-09-16T15:52:04Z)
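To make the entry above concrete: both the generator and the critic are LSTM networks over time-series windows, and the anomaly score can blend reconstruction error with the critic's judgment. The sketch below shows one such combination; the layer sizes, the convex `alpha` weighting, and this particular score are illustrative assumptions (the paper reports the best of several scoring techniques).

```python
# Hedged sketch of LSTM-based generator/critic scoring for time series.
import torch

class LSTMGenerator(torch.nn.Module):        # reconstructs input windows
    def __init__(self, d_in=1, d_hid=32):
        super().__init__()
        self.enc = torch.nn.LSTM(d_in, d_hid, batch_first=True)
        self.dec = torch.nn.LSTM(d_hid, d_hid, batch_first=True)
        self.out = torch.nn.Linear(d_hid, d_in)

    def forward(self, x):                    # x: (B, T, d_in)
        h, _ = self.enc(x)
        h, _ = self.dec(h)
        return self.out(h)                   # (B, T, d_in)

class LSTMCritic(torch.nn.Module):           # scores how "real" a window looks
    def __init__(self, d_in=1, d_hid=32):
        super().__init__()
        self.rnn = torch.nn.LSTM(d_in, d_hid, batch_first=True)
        self.out = torch.nn.Linear(d_hid, 1)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h[:, -1])            # (B, 1), from the last hidden state

@torch.no_grad()
def anomaly_score(x, gen, critic, alpha=0.5):
    rec_err = (x - gen(x)).abs().mean(dim=(1, 2))        # reconstruction error
    critic_score = -critic(x).squeeze(1)                 # low critic output = suspicious
    return alpha * rec_err + (1 - alpha) * critic_score  # one possible combination

gen, critic = LSTMGenerator(), LSTMCritic()
x = torch.randn(8, 100, 1)                               # 8 windows of length 100
print(anomaly_score(x, gen, critic))
```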