Diminishing Empirical Risk Minimization for Unsupervised Anomaly
Detection
- URL: http://arxiv.org/abs/2205.14676v1
- Date: Sun, 29 May 2022 14:18:26 GMT
- Title: Diminishing Empirical Risk Minimization for Unsupervised Anomaly
Detection
- Authors: Shaoshen Wang (1), Yanbin Liu (2), Ling Chen (1), Chengqi Zhang (1)
((1) Australian Artificial Intelligence Institute, University of Technology
Sydney, Sydney, Australia, (2) Centre for Medical Research, The University of
Western Australia, Perth, Australia)
- Abstract summary: Empirical Risk Minimization (ERM) assumes that the performance of an algorithm on an unknown distribution can be approximated by averaging losses on the known training set.
We propose a novel Diminishing Empirical Risk Minimization (DERM) framework to break through the limitations of ERM.
DERM adaptively adjusts the impact of individual losses through a well-devised aggregation strategy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised anomaly detection (AD) is a challenging task in realistic
applications. Recently, there has been an increasing trend to detect anomalies with
deep neural networks (DNN). However, most popular deep AD detectors cannot
protect the network from learning contaminated information brought by anomalous
data, resulting in unsatisfactory detection performance and overfitting issues.
In this work, we identify one reason that hinders most existing DNN-based
anomaly detection methods from performing well: the wide adoption of Empirical
Risk Minimization (ERM). ERM assumes that the performance of an algorithm on an
unknown distribution can be approximated by averaging losses on the known
training set. This averaging scheme thus ignores the distinctions between
normal and anomalous instances. To break through the limitations of ERM, we
propose a novel Diminishing Empirical Risk Minimization (DERM) framework.
Specifically, DERM adaptively adjusts the impact of individual losses through a
well-devised aggregation strategy. Theoretically, our proposed DERM can
directly modify the gradient contribution of each individual loss in the
optimization process to suppress the influence of outliers, leading to a robust
anomaly detector. Empirically, DERM outperformed the state-of-the-art on the
unsupervised AD benchmark consisting of 18 datasets.
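The abstract does not give the exact DERM aggregation rule, so the following Python/PyTorch snippet is only a minimal illustrative sketch of the general idea: replace ERM's uniform average of per-sample losses with weights that shrink as a loss grows, so that large (likely anomalous) losses contribute less to the gradient. The softmax weighting and the temperature parameter are assumptions made for illustration, not the paper's formulation.

import torch

def diminishing_aggregate(losses: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Illustrative stand-in for a DERM-style aggregation: per-sample weights
    # decrease as the loss increases, unlike the uniform 1/n weights of ERM.
    # The weights are computed without gradient tracking, so they only rescale
    # each sample's gradient contribution in the backward pass.
    with torch.no_grad():
        weights = torch.softmax(-losses / temperature, dim=0)  # large loss -> small weight
    return (weights * losses).sum()

Usage: inside a training loop, replacing the plain ERM average loss = per_sample_losses.mean() with loss = diminishing_aggregate(per_sample_losses) down-weights samples with unusually large losses, which are more likely to be anomalies in a contaminated training set.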
Related papers
- Feature Attenuation of Defective Representation Can Resolve Incomplete Masking on Anomaly Detection [1.0358639819750703]
In unsupervised anomaly detection (UAD) research, it is necessary to develop a computationally efficient and scalable solution.
We revisit the reconstruction-by-inpainting approach and rethink how to improve it by analyzing its strengths and weaknesses.
We propose Feature Attenuation of Defective Representation (FADeR), which employs only two layers to attenuate the feature information of anomaly reconstruction.
arXiv Detail & Related papers (2024-07-05T15:44:53Z)
- Unraveling the "Anomaly" in Time Series Anomaly Detection: A Self-supervised Tri-domain Solution [89.16750999704969]
Anomaly labels hinder traditional supervised models in time series anomaly detection.
Various SOTA deep learning techniques, such as self-supervised learning, have been introduced to tackle this issue.
We propose a novel self-supervised-learning-based Tri-domain Anomaly Detector (TriAD).
arXiv Detail & Related papers (2023-11-19T05:37:18Z)
- Open-Set Graph Anomaly Detection via Normal Structure Regularisation [30.638274744518682]
Open-set Graph Anomaly Detection (GAD) aims to train a detection model using a small number of normal and anomaly nodes.
Current supervised GAD methods tend to over-emphasise fitting the seen anomalies, leading to many unseen anomalies being detected as normal nodes.
We propose a novel open-set GAD approach, namely normal structure regularisation (NSReg), to achieve generalised detection ability to unseen anomalies.
arXiv Detail & Related papers (2023-11-12T13:25:28Z)
- MSFlow: Multi-Scale Flow-based Framework for Unsupervised Anomaly Detection [124.52227588930543]
Unsupervised anomaly detection (UAD) attracts a lot of research interest and drives widespread applications.
An inconspicuous yet powerful statistical model, the normalizing flow, is appropriate for anomaly detection and localization in an unsupervised fashion.
We propose a novel Multi-Scale Flow-based framework, dubbed MSFlow, composed of asymmetrical parallel flows followed by a fusion flow.
Our MSFlow achieves a new state-of-the-art with a detection AUROC score of up to 99.7%, a localization AUROC score of 98.8%, and a PRO score of 97.1%.
arXiv Detail & Related papers (2023-08-29T13:38:35Z)
- Adaptive Thresholding Heuristic for KPI Anomaly Detection [1.57731592348751]
A plethora of outlier detectors have been explored in the time series domain; however, in a business sense, not all outliers are anomalies of interest.
This article proposes an Adaptive Thresholding Heuristic (ATH) to dynamically adjust the detection threshold based on the local properties of the data distribution and adapt to changes in time series patterns.
Experimental results show that ATH is efficient, making it scalable for near-real-time anomaly detection, and is flexible with forecasters and outlier detectors.
arXiv Detail & Related papers (2023-08-21T06:45:28Z)
- Anomaly Detection with Score Distribution Discrimination [4.468952886990851]
We propose to optimize the anomaly scoring function from the view of score distribution.
We design a novel loss function called Overlap loss that minimizes the overlap area between the score distributions of normal and abnormal samples (a generic sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-06-26T03:32:57Z)
- Time-series Anomaly Detection via Contextual Discriminative Contrastive Learning [0.0]
One-class classification methods are commonly used for anomaly detection tasks.
We propose a novel approach inspired by the loss function of DeepSVDD.
We combine our approach with a deterministic contrastive loss from Neutral AD, a promising self-supervised learning anomaly detection approach.
arXiv Detail & Related papers (2023-04-16T21:36:19Z)
- Are we certain it's anomalous? [57.729669157989235]
Anomaly detection in time series is a complex task since anomalies are rare due to highly non-linear temporal correlations.
Here we propose the novel use of Hyperbolic uncertainty for Anomaly Detection (HypAD)
HypAD learns in a self-supervised way to reconstruct the input signal.
arXiv Detail & Related papers (2022-11-16T21:31:39Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
- SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
We propose a modular acceleration system, called SUOD, to address this scalability challenge.
arXiv Detail & Related papers (2020-03-11T00:22:50Z)
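The "Anomaly Detection with Score Distribution Discrimination" entry above mentions an Overlap loss that minimizes the overlap area between the score distributions of normal and abnormal samples. The sketch below, referenced from that entry, is only a generic differentiable approximation of that idea built from Gaussian kernel density estimates on a fixed grid; the bandwidth, grid size, and KDE-based construction are illustrative assumptions, not the loss defined in the cited paper.

import torch

def overlap_area(normal_scores: torch.Tensor,
                 abnormal_scores: torch.Tensor,
                 bandwidth: float = 0.1,
                 grid_size: int = 256) -> torch.Tensor:
    # Differentiable estimate of the area shared by the two score densities.
    # Each density is a Gaussian kernel density estimate evaluated on a grid
    # spanning both sets of scores; the overlap is the integral of the
    # pointwise minimum of the two densities.
    scores = torch.cat([normal_scores, abnormal_scores]).detach()
    lo = scores.min() - 3.0 * bandwidth
    hi = scores.max() + 3.0 * bandwidth
    grid = torch.linspace(lo.item(), hi.item(), grid_size)
    step = (hi - lo) / (grid_size - 1)

    def kde(samples: torch.Tensor) -> torch.Tensor:
        diffs = (grid.unsqueeze(1) - samples.unsqueeze(0)) / bandwidth
        return torch.exp(-0.5 * diffs ** 2).mean(dim=1) / (bandwidth * (2 * torch.pi) ** 0.5)

    return torch.minimum(kde(normal_scores), kde(abnormal_scores)).sum() * step

Minimizing this quantity pushes the two score distributions apart, which is the stated goal of the Overlap loss; gradients flow back into the scores through the kernel density estimates.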