Estimating the Contamination Factor's Distribution in Unsupervised Anomaly Detection
- URL: http://arxiv.org/abs/2210.10487v2
- Date: Tue, 17 Oct 2023 20:35:13 GMT
- Title: Estimating the Contamination Factor's Distribution in Unsupervised Anomaly Detection
- Authors: Lorenzo Perini, Paul Buerkner and Arto Klami
- Abstract summary: Anomaly detection methods identify examples that do not follow the expected behaviour.
Scores are thresholded so that the proportion of examples marked as anomalies equals the expected proportion of anomalies, called the contamination factor.
We introduce a method for estimating the posterior distribution of the contamination factor of a given unlabeled dataset.
- Score: 7.174572371800215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Anomaly detection methods identify examples that do not follow the expected
behaviour, typically in an unsupervised fashion, by assigning real-valued
anomaly scores to the examples based on various heuristics. These scores need
to be transformed into actual predictions by thresholding, so that the
proportion of examples marked as anomalies equals the expected proportion of
anomalies, called contamination factor. Unfortunately, there are no good
methods for estimating the contamination factor itself. We address this need
from a Bayesian perspective, introducing a method for estimating the posterior
distribution of the contamination factor of a given unlabeled dataset. We
leverage the outputs of several anomaly detectors as a representation that
already captures the basic notion of anomalousness and estimate the
contamination using a specific mixture formulation. Empirically on 22 datasets,
we show that the estimated distribution is well-calibrated and that setting the
threshold using the posterior mean improves the anomaly detectors' performance
over several alternative methods. All code is publicly available for full
reproducibility.
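The thresholding step described in the abstract is easy to illustrate. The sketch below is not the authors' released code; it only shows, under illustrative values, how a point estimate of the contamination factor (e.g., the posterior mean) would be turned into hard predictions, with the function name and the example numbers being assumptions.
```python
# Minimal sketch (not the paper's implementation): turn a contamination-factor
# estimate gamma into hard predictions by flagging the top-gamma fraction of scores.
import numpy as np

def threshold_with_contamination(scores: np.ndarray, gamma: float) -> np.ndarray:
    """Return 1 for examples flagged as anomalies, 0 for normals."""
    threshold = np.quantile(scores, 1.0 - gamma)   # (1 - gamma) quantile as the cut-off
    return (scores >= threshold).astype(int)

# Hypothetical usage: scores from any unsupervised detector, gamma set to the
# posterior mean of the contamination factor.
rng = np.random.default_rng(0)
scores = rng.normal(size=1_000)
predictions = threshold_with_contamination(scores, gamma=0.05)
print(predictions.mean())   # roughly 0.05 of the examples are flagged
```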
Related papers
- Label Shift Estimators for Non-Ignorable Missing Data [2.605549784939959]
We consider the problem of estimating the mean of a random variable Y subject to non-ignorable missingness, i.e., where the missingness mechanism depends on Y.
We use our approach to estimate disease prevalence using a large health survey, comparing ignorable and non-ignorable approaches.
arXiv Detail & Related papers (2023-10-27T16:50:13Z)
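As a toy illustration of why non-ignorable missingness matters (this is not the paper's estimator, and the observation probabilities below are assumed known purely for the demonstration):
```python
# Toy demonstration: when the chance of observing Y depends on Y itself, the
# complete-case ("ignorable") mean is biased, while inverse-probability weighting
# with the true observation probabilities recovers E[Y].
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.0, size=100_000)
p_obs = 1.0 / (1.0 + np.exp(-(y - 2.0)))     # missingness depends on Y (non-ignorable)
observed = rng.random(y.size) < p_obs

complete_case = y[observed].mean()           # biased upward
ipw = np.sum(y[observed] / p_obs[observed]) / np.sum(1.0 / p_obs[observed])
print(f"true mean 2.0 | complete-case {complete_case:.3f} | IPW {ipw:.3f}")
```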
- An Iterative Method for Unsupervised Robust Anomaly Detection Under Data Contamination [24.74938110451834]
Most deep anomaly detection models are based on learning normality from datasets.
In practice, the normality assumption is often violated due to the nature of real data distributions.
We propose a learning framework to reduce this gap and achieve better normality representation.
arXiv Detail & Related papers (2023-09-18T02:36:19Z)
- On Tail Decay Rate Estimation of Loss Function Distributions [5.33024001730262]
We develop a novel theory for estimating the tails of marginal distributions.
We show that under some regularity conditions, the shape parameter of the marginal distribution is the maximum tail shape parameter of the family of conditional distributions.
arXiv Detail & Related papers (2023-06-05T11:58:25Z)
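Restating the summary's main claim in extreme-value notation (a hedged paraphrase with assumed notation, not the paper's exact theorem statement):
```latex
% Assuming generalized-Pareto-type tails with shape parameter \xi, the claim reads:
% the marginal tail is governed by the heaviest conditional tail.
\[
  \xi_{\mathrm{marginal}} \;=\; \max_{x \in \mathcal{X}} \, \xi_{\,Y \mid X = x}.
\]
```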
- Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
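A sketch of a deviation-style loss consistent with the summary (labeled anomalies plus a score prior); the standard-normal reference, the margin value, and the function signature are illustrative assumptions rather than the paper's exact formulation.
```python
# Deviation-style loss: normal points are pushed toward zero deviation from a prior
# score distribution, labeled anomalies beyond a margin (illustrative only).
import numpy as np

def deviation_loss(scores: np.ndarray, labels: np.ndarray,
                   margin: float = 5.0, n_ref: int = 5000, seed: int = 0) -> float:
    """labels: 1 for labeled anomalies, 0 for (mostly normal) unlabeled examples."""
    rng = np.random.default_rng(seed)
    ref = rng.normal(size=n_ref)             # reference scores drawn from a N(0, 1) prior
    dev = (scores - ref.mean()) / ref.std()  # z-score deviation from the prior
    per_example = (1 - labels) * np.abs(dev) + labels * np.maximum(0.0, margin - dev)
    return float(per_example.mean())
```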
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
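A generic density-on-latents illustration of the idea in the summary (fit a Gaussian to in-distribution representations and score new inputs by Mahalanobis distance); this is a common baseline rather than necessarily the paper's method, and the regularization constant is an assumption.
```python
# Fit a Gaussian to in-distribution latent features; larger Mahalanobis distances
# for new inputs suggest out-of-distribution data (illustrative baseline).
import numpy as np

def fit_latent_gaussian(latents: np.ndarray):
    mean = latents.mean(axis=0)
    cov = np.cov(latents, rowvar=False) + 1e-6 * np.eye(latents.shape[1])  # regularized
    return mean, np.linalg.inv(cov)

def ood_score(z: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    diff = z - mean
    return float(diff @ cov_inv @ diff)      # squared Mahalanobis distance
```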
- Tracking disease outbreaks from sparse data with Bayesian inference [55.82986443159948]
The COVID-19 pandemic provides new motivation for estimating the empirical rate of transmission during an outbreak.
Standard methods struggle to accommodate the partial observability and sparse data common at finer scales.
We propose a Bayesian framework which accommodates partial observability in a principled manner.
arXiv Detail & Related papers (2020-09-12T20:37:33Z)
- Estimation of Classification Rules from Partially Classified Data [0.9137554315375919]
We consider the situation where the observed sample contains some observations whose class of origin is known, and where the remaining observations in the sample are unclassified.
With the class-conditional distributions taken to be known up to a vector of unknown parameters, the aim is to estimate the Bayes' rule of allocation for subsequent unclassified observations.
arXiv Detail & Related papers (2020-04-13T23:35:25Z)
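The Bayes' rule of allocation referenced in the summary has the standard plug-in form (notation assumed for illustration):
```latex
% Allocate an unclassified observation x to the class with the largest estimated
% posterior weight, with \hat\theta estimated from classified and unclassified data.
\[
  R(x; \hat\theta) \;=\; \arg\max_{k} \; \hat\pi_k \, f_k\!\left(x; \hat\theta_k\right),
\]
% where \hat\pi_k are estimated class priors and f_k are the class-conditional
% densities known up to the parameters \theta_k.
```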
- Batch Stationary Distribution Estimation [98.18201132095066]
We consider the problem of approximating the stationary distribution of an ergodic Markov chain given a set of sampled transitions.
We propose a consistent estimator that is based on recovering a correction ratio function over the given data.
arXiv Detail & Related papers (2020-03-02T09:10:01Z)
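The correction-ratio idea can be written out as follows (a hedged paraphrase with assumed notation, not the paper's exact objective):
```latex
% With transition kernel P, data distribution d over sampled states, and correction
% ratio \tau(s) = \mu(s) / d(s), stationarity of \mu imposes
\[
  d(s')\,\tau(s') \;=\; \sum_{s} d(s)\,\tau(s)\,P(s' \mid s),
  \qquad \sum_{s} d(s)\,\tau(s) = 1,
\]
% so an estimate of \tau from the sampled transitions yields \mu(s) = d(s)\,\tau(s).
```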
- Estimating Gradients for Discrete Random Variables by Sampling without Replacement [93.09326095997336]
We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement.
We show that our estimator can be derived as the Rao-Blackwellization of three different estimators.
arXiv Detail & Related papers (2020-02-14T14:15:18Z)
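As context, the general Horvitz-Thompson form underlying without-replacement estimators is shown below (a textbook identity, not the paper's specific derivation):
```latex
% For a sample S drawn without replacement from a discrete distribution p, with
% inclusion probabilities q_i = P(i \in S),
\[
  \widehat{\mathbb{E}}\!\left[f(X)\right] \;=\; \sum_{i \in S} \frac{p_i}{q_i}\, f(i)
\]
% is unbiased for \mathbb{E}_{p}[f(X)] whenever the q_i can be computed exactly.
```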
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)