TracInAD: Measuring Influence for Anomaly Detection
- URL: http://arxiv.org/abs/2205.01362v4
- Date: Tue, 30 Jan 2024 13:08:40 GMT
- Title: TracInAD: Measuring Influence for Anomaly Detection
- Authors: Hugo Thimonier, Fabrice Popineau, Arpad Rimmel, Bich-Liên Doan and Fabrice Daniel
- Abstract summary: This paper proposes a novel methodology to flag anomalies based on TracIn.
We test our approach using Variational Autoencoders and show that the average influence of a subsample of training points on a test point can serve as a proxy for abnormality.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As with many other tasks, neural networks prove very effective for anomaly
detection purposes. However, very few deep-learning models are suited for
detecting anomalies on tabular datasets. This paper proposes a novel
methodology to flag anomalies based on TracIn, an influence measure initially
introduced for explainability purposes. The proposed methods can serve to
augment any unsupervised deep anomaly detection method. We test our approach
using Variational Autoencoders and show that the average influence of a
subsample of training points on a test point can serve as a proxy for
abnormality. Our model proves to be competitive in comparison with
state-of-the-art approaches: it achieves comparable or better performance in
terms of detection accuracy on medical and cyber-security tabular benchmark
data.
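As a rough illustration of the scoring idea described above, the following is a minimal sketch of a TracIn-style anomaly score; the helper names, checkpoint bookkeeping, and generic loss are illustrative assumptions, not the authors' exact implementation:
```python
import torch

def grad_vector(model, loss_fn, x, y):
    """Flattened parameter gradient of the loss at a single point."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def tracin_anomaly_score(checkpoints, lrs, loss_fn, train_subsample, x_test, y_test):
    """TracIn(z, z') ~= sum_t lr_t * <grad L(w_t; z), grad L(w_t; z')>,
    summed over saved checkpoints w_t and averaged over a subsample of
    training points, as a proxy for the abnormality of the test point."""
    total = 0.0
    for model, lr in zip(checkpoints, lrs):
        g_test = grad_vector(model, loss_fn, x_test, y_test)
        for x_tr, y_tr in train_subsample:
            total += lr * torch.dot(grad_vector(model, loss_fn, x_tr, y_tr), g_test).item()
    return total / len(train_subsample)
```
With a Variational Autoencoder, as used in the paper, `loss_fn` would be the negative ELBO evaluated with the input as its own target, so influence is measured through reconstruction gradients.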
Related papers
- MeLIAD: Interpretable Few-Shot Anomaly Detection with Metric Learning and Entropy-based Scoring [2.394081903745099]
We propose MeLIAD, a novel methodology for interpretable anomaly detection.
MeLIAD is based on metric learning and achieves interpretability by design without relying on any prior distribution assumptions of true anomalies.
Experiments on five public benchmark datasets, including quantitative and qualitative evaluation of interpretability, demonstrate that MeLIAD achieves improved anomaly detection and localization performance.
arXiv Detail & Related papers (2024-09-20T16:01:43Z)
- Unlearnable Examples Detection via Iterative Filtering [84.59070204221366]
Deep neural networks are proven to be vulnerable to data poisoning attacks.
Detecting poisoned samples within a mixed dataset is both valuable and challenging.
We propose an Iterative Filtering approach for identifying unlearnable examples (UEs).
arXiv Detail & Related papers (2024-08-15T13:26:13Z)
- Online-Adaptive Anomaly Detection for Defect Identification in Aircraft Assembly [4.387337528923525]
Anomaly detection identifies deviations from established patterns in data.
We propose a novel framework for online-adaptive anomaly detection using transfer learning.
Experimental results showcase a detection accuracy exceeding 0.975, outperforming the state-of-the-art ET-NET approach.
arXiv Detail & Related papers (2024-06-18T15:11:44Z)
- CL-Flow: Strengthening the Normalizing Flows by Contrastive Learning for Better Anomaly Detection [1.951082473090397]
We propose a self-supervised anomaly detection approach that combines contrastive learning with 2D-Flow.
Compared to mainstream unsupervised approaches, our self-supervised method demonstrates superior detection accuracy, fewer additional model parameters, and faster inference speed.
Our approach showcases new state-of-the-art results, achieving a performance of 99.6% in image-level AUROC on the MVTecAD dataset and 96.8% in image-level AUROC on the BTAD dataset.
arXiv Detail & Related papers (2023-11-12T10:07:03Z)
- Active anomaly detection based on deep one-class classification [9.904380236739398]
We tackle two essential problems of active learning for Deep SVDD: query strategy and semi-supervised learning method.
First, rather than solely identifying anomalies, our query strategy selects uncertain samples according to an adaptive boundary.
Second, we apply noise contrastive estimation in training a one-class classification model to incorporate both labeled normal and abnormal data effectively.
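A hedged sketch of the adaptive-boundary query idea above: select the unlabeled points whose embedding distance to the hypersphere center is closest to the current boundary (most uncertain), rather than the farthest (most anomalous). All names here are hypothetical, not the paper's code:
```python
import torch

def query_uncertain(embeddings, center, radius, k):
    """Pick the k points nearest the Deep SVDD decision boundary.

    embeddings: (N, d) encoded unlabeled samples; center: (d,) hypersphere
    center; radius: scalar boundary radius learned so far."""
    dist = torch.norm(embeddings - center, dim=1)   # ||phi(x) - c||
    uncertainty = (dist - radius).abs()             # gap to the boundary
    return torch.topk(uncertainty, k, largest=False).indices
```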
arXiv Detail & Related papers (2023-09-18T03:56:45Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses to universal adversarial perturbations (UAPs) in normal versus adversarial samples.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- Fake It Till You Make It: Near-Distribution Novelty Detection by Score-Based Generative Models [54.182955830194445]
Existing models either fail or face a dramatic performance drop under the so-called "near-distribution" setting.
We propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data.
Our method improves near-distribution novelty detection by 6% and surpasses the state of the art by 1% to 5% across nine novelty detection benchmarks.
arXiv Detail & Related papers (2022-05-28T02:02:53Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
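A hedged sketch of how labeled anomalies and a Gaussian prior can shape a deviation-style loss in the spirit of this approach (the margin and reference-sample count are illustrative defaults, not the paper's values):
```python
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """scores: (N,) predicted anomaly scores; labels: (N,) with 1 = anomaly.
    Deviations are standardized against scores drawn from a N(0, 1) prior."""
    ref = torch.randn(n_ref)
    dev = (scores - ref.mean()) / ref.std()
    normal_term = (1 - labels) * dev.abs()                    # keep normals near the prior mean
    anomaly_term = labels * torch.clamp(margin - dev, min=0)  # push anomalies >= margin deviations above it
    return (normal_term + anomaly_term).mean()
```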
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Meta-learning One-class Classifiers with Eigenvalue Solvers for Supervised Anomaly Detection [55.888835686183995]
We propose a neural network-based meta-learning method for supervised anomaly detection.
We experimentally demonstrate that the proposed method achieves better performance than existing anomaly detection and few-shot learning methods.
arXiv Detail & Related papers (2021-03-01T01:43:04Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
Training is scaled with a novel loss function and a centroid updating scheme while matching the accuracy of softmax models.
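A rough sketch of single-forward-pass rejection via distance to class centroids, in the spirit of the centroid scheme mentioned above; the RBF kernel and its length scale are assumptions:
```python
import torch

def centroid_confidence(features, centroids, sigma=0.1):
    """features: (N, d) encodings; centroids: (C, d) per-class centroids.
    Returns the RBF similarity to the closest centroid; a test point is
    rejected as out-of-distribution when this confidence falls below a
    chosen threshold."""
    d2 = torch.cdist(features, centroids).pow(2)  # squared distances, (N, C)
    kernel = torch.exp(-d2 / (2 * sigma ** 2))    # RBF kernel values
    return kernel.max(dim=1).values               # closest-centroid confidence
```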
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
- Regularized Cycle Consistent Generative Adversarial Network for Anomaly Detection [5.457279006229213]
We propose a new Regularized Cycle Consistent Generative Adversarial Network (RCGAN) in which deep neural networks are adversarially trained to better recognize anomalous samples.
Experimental results on both real-world and synthetic data show that our model leads to significant and consistent improvements on previous anomaly detection benchmarks.
arXiv Detail & Related papers (2020-01-18T03:35:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.