Anomaly detection with semi-supervised classification based on risk estimators
- URL: http://arxiv.org/abs/2309.00379v1
- Date: Fri, 1 Sep 2023 10:30:48 GMT
- Title: Anomaly detection with semi-supervised classification based on risk estimators
- Authors: Le Thi Khanh Hien, Sukanya Patra, and Souhaib Ben Taieb
- Abstract summary: We propose two novel classification-based anomaly detection methods.
Firstly, we introduce a semi-supervised shallow anomaly detection method based on an unbiased risk estimator.
Secondly, we present a semi-supervised deep anomaly detection method utilizing a nonnegative (biased) risk estimator.
- Score: 4.519754139322585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A significant limitation of one-class classification anomaly detection
methods is their reliance on the assumption that unlabeled training data only
contains normal instances. To overcome this impractical assumption, we propose
two novel classification-based anomaly detection methods. Firstly, we introduce
a semi-supervised shallow anomaly detection method based on an unbiased risk
estimator. Secondly, we present a semi-supervised deep anomaly detection method
utilizing a nonnegative (biased) risk estimator. We establish estimation error
bounds and excess risk bounds for both risk minimizers. Additionally, we
propose techniques to select appropriate regularization parameters that ensure
the nonnegativity of the empirical risk in the shallow model under specific
loss functions. Our extensive experiments provide strong evidence of the
effectiveness of the risk-based anomaly detection methods.
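To make the risk-estimator idea concrete, here is a minimal, hedged sketch of a nonnegative PU-style empirical risk for a labeled-normal set plus a contaminated unlabeled set. The sigmoid surrogate loss, the assumed contamination rate `pi_anom`, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: a nonnegative PU-style empirical risk for semi-supervised
# anomaly detection. This is NOT the paper's exact estimator; the surrogate
# loss, variable names, and contamination rate `pi_anom` are assumptions.
import numpy as np

def sigmoid_loss(scores: np.ndarray, y: int) -> np.ndarray:
    """Sigmoid surrogate loss for label y in {+1 (normal), -1 (anomaly)}."""
    return 1.0 / (1.0 + np.exp(y * scores))

def nonnegative_risk(scores_normal: np.ndarray,
                     scores_unlabeled: np.ndarray,
                     pi_anom: float) -> float:
    """Empirical risk built from labeled-normal and unlabeled scores.

    The anomaly-side term is estimated from the unlabeled data by subtracting
    the (scaled) normal contribution; clipping it at zero mirrors the
    'nonnegative (biased)' correction described in the abstract.
    """
    pi_norm = 1.0 - pi_anom
    # Risk of classifying labeled normals as normal.
    risk_normal = pi_norm * sigmoid_loss(scores_normal, +1).mean()
    # Unbiased-but-possibly-negative estimate of the anomaly-side risk.
    risk_anom_est = (sigmoid_loss(scores_unlabeled, -1).mean()
                     - pi_norm * sigmoid_loss(scores_normal, -1).mean())
    return risk_normal + max(0.0, risk_anom_est)

# Toy usage with random scores from a hypothetical scoring function g(x).
rng = np.random.default_rng(0)
scores_normal = rng.normal(loc=1.0, scale=1.0, size=200)      # labeled normals
scores_unlabeled = rng.normal(loc=0.5, scale=1.5, size=1000)  # contaminated pool
print(nonnegative_risk(scores_normal, scores_unlabeled, pi_anom=0.1))
```

In a training loop, this quantity would be minimized over the parameters of the scoring function; the max(0, .) clipping is what keeps the otherwise unbiased anomaly-side term from going negative, which is the nonnegativity concern the abstract raises for the shallow model's regularization.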
Related papers
- Data-driven decision-making under uncertainty with entropic risk measure [5.407319151576265]
The entropic risk measure is widely used in high-stakes decision making to account for tail risks associated with an uncertain loss.
To debias the empirical entropic risk estimator, we propose a strongly consistent bootstrapping procedure.
We show that cross validation methods can result in significantly higher out-of-sample risk for the insurer if the bias in validation performance is not corrected for.
arXiv Detail & Related papers (2024-09-30T04:02:52Z)
- Predictive Uncertainty Quantification via Risk Decompositions for Strictly Proper Scoring Rules [7.0549244915538765]
Uncertainty quantification in predictive modeling often relies on ad hoc methods.
This paper introduces a theoretical approach to understanding uncertainty through statistical risks.
We show how to split pointwise risk into Bayes risk and excess risk (a generic form of this decomposition is sketched after this list).
arXiv Detail & Related papers (2024-02-16T14:40:22Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- An Iterative Method for Unsupervised Robust Anomaly Detection Under Data Contamination [24.74938110451834]
Most deep anomaly detection models are based on learning normality from datasets.
In practice, the normality assumption is often violated due to the nature of real data distributions.
We propose a learning framework to reduce this gap and achieve better normality representation.
arXiv Detail & Related papers (2023-09-18T02:36:19Z)
- AGAD: Adversarial Generative Anomaly Detection [12.68966318231776]
Anomaly detection suffers from a lack of anomaly examples, owing to the diversity of abnormalities and the difficulty of obtaining large-scale anomaly data.
We propose Adversarial Generative Anomaly Detection (AGAD), a self-contrast-based anomaly detection paradigm.
Our method generates pseudo-anomaly data for both supervised and semi-supervised anomaly detection scenarios.
arXiv Detail & Related papers (2023-04-09T10:40:02Z)
- Estimating the Contamination Factor's Distribution in Unsupervised Anomaly Detection [7.174572371800215]
Anomaly detection methods identify examples that do not follow the expected behaviour.
The proportion of examples marked as anomalies is usually set equal to the expected proportion of anomalies, called the contamination factor.
We introduce a method for estimating the posterior distribution of the contamination factor of a given unlabeled dataset.
arXiv Detail & Related papers (2022-10-19T11:51:25Z)
- Hierarchical Semi-Supervised Contrastive Learning for Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
arXiv Detail & Related papers (2022-07-24T18:49:26Z)
- Mitigating multiple descents: A model-agnostic framework for risk monotonization [84.6382406922369]
We develop a general framework for risk monotonization based on cross-validation.
We propose two data-driven methodologies, namely zero- and one-step, that are akin to bagging and boosting.
arXiv Detail & Related papers (2022-05-25T17:41:40Z)
- Risk Consistent Multi-Class Learning from Label Proportions [64.0125322353281]
This study addresses a multiclass learning from label proportions (MCLLP) setting in which training instances are provided in bags.
Most existing MCLLP methods impose bag-wise constraints on the prediction of instances or assign them pseudo-labels.
A risk-consistent method is proposed for instance classification using the empirical risk minimization framework.
arXiv Detail & Related papers (2022-03-24T03:49:04Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
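For readers unfamiliar with the terminology used in the risk-decomposition entry above and in the main abstract, a generic pointwise risk decomposition can be written as follows; the notation (g, \ell, a) is chosen here purely for illustration and is not taken from either paper.

```latex
% Generic pointwise risk decomposition (illustrative notation, not the papers' own).
\[
  R(g \mid x)
  \;=\; \mathbb{E}_{y \mid x}\!\big[\ell\big(g(x), y\big)\big]
  \;=\; \underbrace{\inf_{a}\, \mathbb{E}_{y \mid x}\!\big[\ell(a, y)\big]}_{\text{Bayes risk at } x}
  \;+\; \underbrace{\Big( R(g \mid x) - \inf_{a}\, \mathbb{E}_{y \mid x}\!\big[\ell(a, y)\big] \Big)}_{\text{excess risk at } x}
\]
```

Averaging both sides over x recovers the usual marginal risk, Bayes risk, and excess risk; the excess risk is the quantity bounded for both risk minimizers in the main paper.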
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.