Gradient-based Novelty Detection Boosted by Self-supervised Binary
Classification
- URL: http://arxiv.org/abs/2112.09815v1
- Date: Sat, 18 Dec 2021 01:17:15 GMT
- Title: Gradient-based Novelty Detection Boosted by Self-supervised Binary
Classification
- Authors: Jingbo Sun, Li Yang, Jiaxin Zhang, Frank Liu, Mahantesh Halappanavar,
Deliang Fan, Yu Cao
- Abstract summary: Novelty detection aims to automatically identify out-of-distribution (OOD) data, without any prior knowledge of them.
We propose a novel, self-supervised approach that does not rely on any pre-defined OOD data.
In the evaluation with multiple datasets, the proposed approach consistently outperforms state-of-the-art supervised and unsupervised methods.
- Score: 20.715158729811755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Novelty detection aims to automatically identify out-of-distribution (OOD)
data, without any prior knowledge of them. It is a critical step in data
monitoring, behavior analysis and other applications, helping enable continual
learning in the field. Conventional OOD detection methods perform
multivariate analysis on an ensemble of data or features, and usually resort
to supervision with OOD data to improve accuracy. In reality, such
supervision is impractical, as one cannot anticipate the anomalous data. In this
paper, we propose a novel, self-supervised approach that does not rely on any
pre-defined OOD data: (1) the method evaluates the Mahalanobis distance
between the gradients of in-distribution and OOD data; (2) a self-supervised
binary classifier guides the label selection used to generate the gradients,
maximizing the Mahalanobis distance. In evaluations with
multiple datasets, such as CIFAR-10, CIFAR-100, SVHN and TinyImageNet, the
proposed approach consistently outperforms state-of-the-art supervised and
unsupervised methods on the area under the receiver operating characteristic
curve (AUROC) and area under the precision-recall curve (AUPR) metrics. We further
demonstrate that this detector is able to accurately learn one OOD class in
continual learning.
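To make the scoring idea concrete, here is a minimal PyTorch sketch of gradient-based Mahalanobis scoring, assuming a trained classifier with a final linear layer named `fc`; the helper names are illustrative and this is not the authors' released code.
```python
import torch
import torch.nn.functional as F

def gradient_feature(model, x, label):
    """Flatten the final-layer gradient induced by a chosen label into a vector."""
    model.zero_grad()
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return model.fc.weight.grad.detach().flatten()  # assumes an `fc` head

def fit_gradient_gaussian(grad_features):
    """Fit the mean and precision of in-distribution gradient features (N x D)."""
    mu = grad_features.mean(dim=0)
    centered = grad_features - mu
    cov = centered.T @ centered / (grad_features.size(0) - 1)
    precision = torch.linalg.pinv(cov)  # pseudo-inverse for numerical stability
    return mu, precision

def mahalanobis_score(g, mu, precision):
    """Larger distance from the in-distribution gradient statistics suggests OOD."""
    d = g - mu
    return torch.sqrt(d @ precision @ d)
```
In the paper's framing, the self-supervised binary classifier picks the label fed to `gradient_feature`, so that the resulting gradients separate in-distribution and OOD data as widely as possible.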
Related papers
- Unifying Unsupervised Graph-Level Anomaly Detection and Out-of-Distribution Detection: A Benchmark [73.58840254552656]
Unsupervised graph-level anomaly detection (GLAD) and unsupervised graph-level out-of-distribution (OOD) detection have received significant attention in recent years.
We present a unified benchmark for unsupervised graph-level OOD and anomaly detection.
Our benchmark encompasses 35 datasets spanning four practical anomaly and OOD detection scenarios.
We conduct multi-dimensional analyses to explore the effectiveness, generalizability, robustness, and efficiency of existing methods.
arXiv Detail & Related papers (2024-06-21T04:07:43Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
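Idea (2) resembles a CutMix-style overlay; a hedged sketch follows, with `tail_img` and `ood_img` as CHW tensors (the function name and box size are illustrative, not the paper's code):
```python
import torch
import torch.nn.functional as F

def overlay_tail_on_ood(tail_img, ood_img, box_frac=0.5):
    """Paste a resized tail-class image onto a random region of a
    context-rich OOD image, keeping the tail-class label."""
    _, h, w = ood_img.shape
    bh, bw = int(h * box_frac), int(w * box_frac)
    top = torch.randint(0, h - bh + 1, (1,)).item()
    left = torch.randint(0, w - bw + 1, (1,)).item()
    patch = F.interpolate(tail_img.unsqueeze(0), size=(bh, bw),
                          mode="bilinear", align_corners=False).squeeze(0)
    out = ood_img.clone()
    out[:, top:top + bh, left:left + bw] = patch
    return out
```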
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Open-World Lifelong Graph Learning [7.535219325248997]
We study the problem of lifelong graph learning in an open-world scenario.
We utilize Out-of-Distribution (OOD) detection methods to recognize new classes.
We suggest performing new class detection by combining OOD detection methods with information aggregated from the graph neighborhood.
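One plausible reading of combining OOD detection with neighborhood information is to smooth per-node OOD scores over the adjacency structure; a sketch under that assumption, where `alpha` is an illustrative mixing weight, not the paper's formulation:
```python
import numpy as np

def neighborhood_ood_score(node_scores, adj, alpha=0.5):
    """Blend each node's own OOD score with the mean score of its neighbors.

    node_scores: (n,) array of per-node OOD scores.
    adj: (n, n) binary adjacency matrix.
    """
    deg = adj.sum(axis=1).clip(min=1)          # node degrees, floored at 1
    neighbor_mean = (adj @ node_scores) / deg  # average score over neighbors
    return alpha * node_scores + (1 - alpha) * neighbor_mean
```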
arXiv Detail & Related papers (2023-10-19T08:18:10Z)
- Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
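The stated objective suggests a two-term loss: cross-entropy on labeled in-distribution data plus a penalty pushing predictions on the uncertainty dataset toward the uniform distribution. A minimal sketch, with `weight` an assumed hyperparameter:
```python
import torch
import torch.nn.functional as F

def dcm_loss(logits_id, labels_id, logits_unc, weight=0.5):
    """Cross-entropy on labeled ID data plus a confidence-minimization term
    on the uncertainty dataset (KL from uniform to the model's predictions,
    minimized when predictions are maximally uncertain)."""
    ce = F.cross_entropy(logits_id, labels_id)
    log_probs = F.log_softmax(logits_unc, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(1))
    conf = F.kl_div(log_probs, uniform, reduction="batchmean")
    return ce + weight * conf
```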
arXiv Detail & Related papers (2023-06-08T07:05:36Z)
- LINe: Out-of-Distribution Detection by Leveraging Important Neurons [15.797257361788812]
We introduce a new perspective for analyzing the difference in model outputs between in-distribution and OOD data.
We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc out-of-distribution detection.
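As a rough post-hoc sketch of scoring with only the most important neurons (importance here approximated by mean absolute activation on ID data; LINe's actual selection and clipping rules may differ):
```python
import torch

def select_important_neurons(id_feats, keep_frac=0.1):
    """Pick neurons with the largest mean absolute activation on ID data
    (a stand-in for LINe's importance measure)."""
    importance = id_feats.abs().mean(dim=0)
    k = max(1, int(keep_frac * id_feats.size(1)))
    mask = torch.zeros_like(importance)
    mask[importance.topk(k).indices] = 1.0
    return mask

def masked_energy_score(test_feats, mask, fc_weight):
    """Energy-style OOD score from masked penultimate features;
    lower values suggest OOD."""
    logits = (test_feats * mask) @ fc_weight.T
    return logits.logsumexp(dim=1)
```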
arXiv Detail & Related papers (2023-03-24T13:49:05Z)
- Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective [55.45202687256175]
Out-of-distribution (OOD) detection methods assume access to test ground truths, i.e., labels indicating whether individual test samples are in-distribution (IND) or OOD.
In this paper, we are the first to introduce the unsupervised evaluation problem in OOD detection.
We propose three methods to compute Gscore as an unsupervised indicator of OOD detection performance.
arXiv Detail & Related papers (2023-02-16T13:34:35Z)
- GOOD-D: On Unsupervised Graph Out-Of-Distribution Detection [67.90365841083951]
We develop a new graph contrastive learning framework GOOD-D for detecting OOD graphs without using any ground-truth labels.
GOOD-D is able to capture the latent ID patterns and accurately detect OOD graphs based on the semantic inconsistency in different granularities.
As a pioneering work in unsupervised graph-level OOD detection, we build a comprehensive benchmark to compare our proposed approach with different state-of-the-art methods.
arXiv Detail & Related papers (2022-11-08T12:41:58Z)
- Out-Of-Distribution Detection In Unsupervised Continual Learning [7.800379384628357]
Unsupervised continual learning aims to learn new tasks incrementally without requiring human annotations.
An out-of-distribution detector is required at the start of learning to identify whether each new batch of data corresponds to a new task.
We propose a novel OOD detection method that first corrects the output bias and then enhances the output confidence for in-distribution data.
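A minimal sketch of the stated two-step recipe, with `class_bias` assumed to be estimated by the caller and the temperature chosen below 1 to sharpen in-distribution confidence (both are illustrative, not the paper's exact procedure):
```python
import torch

def debiased_confidence(logits, class_bias, temperature=0.5):
    """Subtract a per-class output bias, then sharpen the softmax;
    a low maximum probability then flags a sample as a new task."""
    corrected = logits - class_bias
    probs = torch.softmax(corrected / temperature, dim=1)
    return probs.max(dim=1).values
```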
arXiv Detail & Related papers (2022-04-12T01:24:54Z)
- RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection [12.341250124228859]
We propose a simple yet effective generalized OOD detection method independent of out-of-distribution datasets.
Our approach relies on self-supervised feature learning of the training samples, where the embeddings lie on a compact low-dimensional space.
We empirically show that a pre-trained model with self-supervised contrastive learning yields a better model for uni-dimensional feature learning in the latent space.
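One way to read "uni-dimensional feature learning" is to score a test embedding against one learned direction per class; a hedged cosine-similarity sketch, where the prototypes and names are assumptions rather than RODD's implementation:
```python
import torch
import torch.nn.functional as F

def prototype_cosine_score(z, class_prototypes):
    """Maximum cosine similarity between a test embedding and per-class
    prototype directions from self-supervised pretraining; low similarity
    suggests OOD."""
    z = F.normalize(z, dim=-1)
    protos = F.normalize(class_prototypes, dim=-1)
    return (z @ protos.T).max(dim=-1).values
```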
arXiv Detail & Related papers (2022-04-06T03:05:58Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper we propose a novel method where, from first principles, we combine a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, with no loss in prediction accuracy and close to state-of-the-art OOD detection performance on non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)