Cascade Subspace Clustering for Outlier Detection
- URL: http://arxiv.org/abs/2306.13500v1
- Date: Fri, 23 Jun 2023 13:48:08 GMT
- Title: Cascade Subspace Clustering for Outlier Detection
- Authors: Qi Yang and Hao Zhu
- Abstract summary: We propose a new outlier detection framework that combines a series of weak "outlier detectors" into a single strong one in an iterative fashion.
The residual of the self-representation is passed to the next stage to learn the next weak outlier detector.
Experimental results on image and speaker datasets demonstrate its superiority with respect to state-of-the-art sparse and low-rank outlier detection methods.
- Score: 11.96739972748918
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many methods based on sparse and low-rank representation have been
developed, along with guarantees of correct outlier detection.
Self-representation states that a
point in a subspace can always be expressed as a linear combination of other
points in the subspace. A suitable Markov Chain can be defined on the
self-representation and it allows us to recognize the difference between
inliers and outliers. However, the reconstruction error of the
self-representation, which is still informative for outlier detection, is
neglected. Inspired by gradient boosting, in this paper we propose a new
outlier detection
framework that combines a series of weak "outlier detectors" into a single
strong one in an iterative fashion by constructing multi-pass
self-representation. At each stage, we construct a self-representation based on
elastic-net and define a suitable Markov Chain on it to detect outliers. The
residual of the self-representation is passed to the next stage to learn the
next weak outlier detector. This stage is repeated many times, and the final
decision on outliers is generated from the results of all stages.
Experimental results on image and speaker datasets demonstrate its superiority
with respect to state-of-the-art sparse and low-rank outlier detection methods.
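The cascade is easy to prototype. The sketch below is one plausible reading of the pipeline, not the authors' implementation: each stage fits an elastic-net self-representation, scores points by a random walk on the induced affinity graph, and passes the reconstruction residual to the next stage. The solver choice, the scoring rule, and all names are illustrative assumptions.

```python
# Hedged sketch of the cascade: elastic-net self-representation per stage,
# a random walk on |C| for outlier scores, and the residual fed forward.
# Solver, scoring rule, and names are assumptions, not the paper's code.
import numpy as np
from sklearn.linear_model import ElasticNet

def self_representation(X, alpha=0.01, l1_ratio=0.5):
    """Express each column of X as an elastic-net combination of the others."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        mask = np.arange(n) != j
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
        model.fit(X[:, mask], X[:, j])
        C[mask, j] = model.coef_
    return C

def markov_outlier_scores(C, t=100):
    """t-step random walk on the affinity graph; rarely visited points score high."""
    A = np.abs(C) + np.abs(C).T                               # symmetric affinities
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)   # row-stochastic
    pi = np.full(P.shape[0], 1.0 / P.shape[0])                # uniform start
    for _ in range(t):
        pi = pi @ P
    return 1.0 - pi / pi.max()                                # high score = outlier

def cascade_detect(X, stages=3):
    """Combine the weak per-stage detectors on successive residuals."""
    R, scores = X.astype(float), 0.0
    for _ in range(stages):
        C = self_representation(R)
        scores = scores + markov_outlier_scores(C)
        R = R - R @ C                                         # residual for next stage
    return scores / stages
```

For a d x n matrix X whose columns are data points, cascade_detect(X) returns one score per point; thresholding the scores flags the outliers.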
Related papers
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD)
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
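As a toy illustration of consistency-based scoring only (RCPMOD's contrastive training and loss are more involved), one can rank samples by how much their embeddings disagree across two views; the cosine measure here is an assumption for illustration.

```python
# Toy sketch: samples whose embeddings disagree across views score high.
import numpy as np

def consistency_scores(view_a, view_b):
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    return 1.0 - (a * b).sum(axis=1)   # 0 = perfectly consistent views
```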
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Rethinking Unsupervised Outlier Detection via Multiple Thresholding [15.686139522490189]
We propose a multiple thresholding (Multi-T) module to advance existing scoring methods.
It generates two thresholds that isolate inliers and outliers from the unlabelled target dataset.
Experiments verify that Multi-T can significantly improve existing outlier scoring methods.
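A minimal sketch of the two-threshold idea, with quantile cut-offs as illustrative stand-ins for however Multi-T actually derives its thresholds:

```python
# Sketch: partition precomputed outlier scores with two thresholds;
# the quantile choices are assumptions, not Multi-T's actual rule.
import numpy as np

def two_thresholds(scores, lo_q=0.5, hi_q=0.9):
    t_in, t_out = np.quantile(scores, [lo_q, hi_q])
    inliers = scores <= t_in                         # confidently inlying
    outliers = scores >= t_out                       # confidently outlying
    return inliers, outliers, ~inliers & ~outliers   # the rest is ambiguous
```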
arXiv Detail & Related papers (2024-07-07T14:09:50Z)
- Quantile-based Maximum Likelihood Training for Outlier Detection [5.902139925693801]
We introduce a quantile-based maximum likelihood objective for learning the inlier distribution to improve the outlier separation during inference.
Our approach fits a normalizing flow to pre-trained discriminative features and detects the outliers according to the evaluated log-likelihood.
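One plausible reading of the objective, sketched under the assumption that `flow` is any density model exposing `log_prob`:

```python
# Sketch of a quantile-based maximum-likelihood loss: optimize the
# lowest-density fraction of the batch instead of the mean, so the fit
# tightens at the inlier boundary. `flow` is an assumed stand-in.
import torch

def quantile_ml_loss(flow, features, q=0.25):
    log_probs = flow.log_prob(features)      # per-sample log-likelihood
    cutoff = torch.quantile(log_probs, q)
    hard = log_probs[log_probs <= cutoff]    # hardest (lowest-density) samples
    return -hard.mean()                      # raise their likelihood

def outlier_score(flow, features):
    return -flow.log_prob(features)          # low likelihood => outlier
```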
arXiv Detail & Related papers (2023-08-20T22:27:54Z)
- Robust Outlier Rejection for 3D Registration with Variational Bayes [70.98659381852787]
We develop a novel variational non-local network-based outlier rejection framework for robust alignment.
We propose a voting-based inlier searching strategy to cluster the high-quality hypothetical inliers for transformation estimation.
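A rough sketch of the voting-plus-estimation idea; the length-preservation vote and the closed-form Kabsch fit below are generic stand-ins, not the paper's variational network.

```python
# Sketch: correspondences vote for partners whose pairwise distances they
# preserve under a rigid motion; the best-supported cluster is kept and a
# rigid transform is fit to it (Kabsch). The tolerance is illustrative.
import numpy as np

def inlier_voting(src, dst, tol=0.05):
    """src, dst: (n, 3) putative correspondences. Returns an inlier mask."""
    d_src = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    d_dst = np.linalg.norm(dst[:, None] - dst[None, :], axis=-1)
    votes = (np.abs(d_src - d_dst) < tol).sum(axis=1)  # length-preserving pairs
    return votes >= np.median(votes)                   # best-supported cluster

def kabsch(src, dst):
    """Least-squares R, t with dst ~ src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs
```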
arXiv Detail & Related papers (2023-04-04T03:48:56Z)
- Do We Really Need to Learn Representations from In-domain Data for Outlier Detection? [6.445605125467574]
Methods based on the two-stage framework achieve state-of-the-art performance on this task.
We explore the possibility of avoiding the high cost of training a distinct representation for each outlier detection task.
In experiments, we demonstrate competitive or better performance on a variety of outlier detection benchmarks compared with previous two-stage methods.
arXiv Detail & Related papers (2021-05-19T17:30:28Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
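The two-stage recipe maps directly onto off-the-shelf tools. A minimal sketch, assuming `encode` is any frozen self-supervised feature extractor (a stand-in, not a real API), with a classical one-class model as the second stage:

```python
# Stage 1: frozen self-supervised features; stage 2: one-class classifier.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_one_class_detector(train_data, encode):
    Z = np.stack([encode(x) for x in train_data])   # stage 1: featurize
    clf = OneClassSVM(kernel="rbf", nu=0.1).fit(Z)  # stage 2: fit on inliers
    # Higher returned score = more anomalous.
    return lambda x: -clf.decision_function(encode(x)[None])[0]
```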
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Anomaly Detection based on Zero-Shot Outlier Synthesis and Hierarchical Feature Distillation [2.580765958706854]
Synthetically generated anomalies are a solution when anomalous data is ill-defined or not fully defined.
We propose a two-level hierarchical latent space representation that distills inliers' feature-descriptors.
We select those that lie on the outskirts of the training data as synthetic-outlier generators.
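One way to read this selection step, sketched with an illustrative distance criterion and noise scale: seed synthetic outliers from the training descriptors farthest from the bulk and push them outward.

```python
# Sketch: inlier descriptors on the outskirts of the training data seed
# synthetic outliers via small outward perturbations. All criteria here
# are assumptions, not the paper's hierarchical latent-space procedure.
import numpy as np

def synthesize_outliers(Z, frac=0.05, step=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    center = Z.mean(axis=0)
    dist = np.linalg.norm(Z - center, axis=1)
    idx = np.argsort(dist)[-max(1, int(frac * len(Z))):]  # outskirt points
    seeds = Z[idx]
    out_dir = (seeds - center) / dist[idx, None]          # unit outward directions
    return seeds + step * out_dir + rng.normal(0.0, 0.1, seeds.shape)
```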
arXiv Detail & Related papers (2020-10-10T23:34:02Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks.
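Median smoothing itself is simple to sketch: perturb the input with Gaussian noise and take the per-coordinate median of the regressed outputs. `regressor`, the noise scale, and the sample count are illustrative assumptions.

```python
# Sketch of median smoothing for a regression output (e.g. box coordinates):
# the median over Gaussian perturbations gives a certifiably stable estimate.
import numpy as np

def median_smooth(regressor, x, sigma=0.25, n=100, rng=None):
    rng = rng or np.random.default_rng(0)
    noisy = x[None] + rng.normal(0.0, sigma, (n,) + x.shape)
    preds = np.array([regressor(v) for v in noisy])
    return np.median(preds, axis=0)    # per-coordinate median prediction
```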
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.