READ: Aggregating Reconstruction Error into Out-of-distribution Detection
- URL: http://arxiv.org/abs/2206.07459v1
- Date: Wed, 15 Jun 2022 11:30:41 GMT
- Title: READ: Aggregating Reconstruction Error into Out-of-distribution Detection
- Authors: Wenyu Jiang, Hao Cheng, Mingcai Chen, Shuai Feng, Yuxin Ge, Chongjun Wang
- Abstract summary: Deep neural networks are known to be overconfident for abnormal data.
We propose READ (Reconstruction Error Aggregated Detector) to unify inconsistencies from classifier and autoencoder.
Our method reduces the average FPR@95TPR by up to 9.8% compared with previous state-of-the-art OOD detection algorithms.
- Score: 5.069442437365223
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Detecting out-of-distribution (OOD) samples is crucial to the safe deployment
of a classifier in the real world. However, deep neural networks are known to
be overconfident for abnormal data. Existing works directly design score
functions by mining the inconsistency of classifier outputs between
in-distribution (ID) and OOD data. In this paper, we further complement this
inconsistency with reconstruction error, based on the assumption that an
autoencoder trained on ID data cannot reconstruct OOD samples as well as ID
samples. We propose a novel method, READ
(Reconstruction Error Aggregated Detector), to unify inconsistencies from
classifier and autoencoder. Specifically, the reconstruction error of raw
pixels is transformed into the latent space of the classifier. We show that the
transformed reconstruction error bridges the semantic gap and inherits the
detection performance of the original pixel-space error. Moreover, we propose
an adjustment strategy that alleviates the overconfidence problem of the
autoencoder according to a fine-grained characterization of OOD data. For the
two scenarios of pre-training and retraining, we present two variants of our
method: READ-MD (Mahalanobis Distance), which relies only on a pre-trained
classifier, and READ-ED (Euclidean Distance), which retrains the classifier.
Our methods do not require access to test-time OOD data for fine-tuning
hyperparameters. Finally, we
demonstrate the effectiveness of the proposed methods through extensive
comparisons with state-of-the-art OOD detection algorithms. On a CIFAR-10
pre-trained WideResNet, our method reduces the average FPR@95TPR by up to 9.8%
compared with previous state-of-the-art.
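The abstract gives the aggregation only at a high level; the sketch below illustrates one plausible reading of a READ-MD-style score, where the classifier-side inconsistency is a Mahalanobis distance to the nearest ID class mean and the pixel reconstruction error is transformed into the latent space by comparing the classifier features of the input with those of its reconstruction. All names (`classifier_features`, `autoencoder`, the fitted statistics `mu` and `cov_inv`) and the weight `alpha` are illustrative assumptions, not the authors' implementation.

```python
import torch

@torch.no_grad()
def read_md_score(x, classifier_features, autoencoder, mu, cov_inv, alpha=0.5):
    """Hypothetical READ-MD-style OOD score (higher = more likely OOD).

    Assumed inputs (not from the paper's code):
      classifier_features: maps images to penultimate-layer features, shape (B, D)
      autoencoder:         trained on ID data only
      mu, cov_inv:         per-class feature means (C, D) and shared inverse
                           covariance (D, D) estimated on ID training data
    """
    z = classifier_features(x)                   # features of the input
    z_rec = classifier_features(autoencoder(x))  # features of its reconstruction

    # Classifier-side inconsistency: Mahalanobis distance to the nearest class mean.
    diffs = z.unsqueeze(1) - mu.unsqueeze(0)     # (B, C, D)
    maha = torch.einsum('bcd,de,bce->bc', diffs, cov_inv, diffs)
    cls_score = maha.min(dim=1).values

    # Reconstruction error transformed into the classifier's latent space:
    # the Mahalanobis distance between input features and reconstruction features.
    d = z - z_rec
    rec_score = torch.einsum('bd,de,be->b', d, cov_inv, d)

    # Aggregate the two inconsistencies; the weighting is a free choice here.
    return cls_score + alpha * rec_score
```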
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish them from in-distribution (ID) data.
We introduce a novel perspective, namely employing different common corruptions in the input space (see the sketch below).
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
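The summary above leaves the scoring mechanics open; the following is a hedged sketch of one way corruptions could expand the input for OOD scoring, aggregating a standard confidence score over several corrupted views. The corruption choices and the mean aggregation are assumptions, not necessarily the paper's recipe.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def expanded_input_score(x, model, corruptions):
    """Score an input over its original and corrupted views and aggregate.
    Higher score = more likely in-distribution. The maximum softmax
    probability (MSP) and mean aggregation are illustrative choices."""
    views = [x] + [c(x) for c in corruptions]
    scores = [F.softmax(model(v), dim=1).max(dim=1).values for v in views]
    return torch.stack(scores, dim=0).mean(dim=0)

# Hypothetical corruption set:
corruptions = [
    lambda x: x + 0.05 * torch.randn_like(x),            # Gaussian noise
    lambda x: F.avg_pool2d(x, 3, stride=1, padding=1),   # mild blur
]
```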
- Representation Norm Amplification for Out-of-Distribution Detection in Long-Tail Learning [10.696635172502141]
We introduce our method, called Representation Norm Amplification (RNA), which addresses the problem of detecting out-of-distribution samples in long-tail learning.
Experiments show that RNA achieves superior performance in both OOD detection and classification compared to state-of-the-art methods (see the norm-scoring sketch below).
arXiv Detail & Related papers (2024-08-20T09:27:07Z)
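RNA's training objective is not reproduced here; the sketch below only shows the inference-time idea, under the assumption that the method separates ID and OOD data by the norm of the learned representation.

```python
import torch

@torch.no_grad()
def representation_norm_score(x, feature_extractor):
    """Hypothetical norm-based OOD score: the L2 norm of the penultimate
    representation, assuming training has amplified norms for ID data
    (higher norm = more likely ID)."""
    z = feature_extractor(x)      # (B, D) penultimate features
    return z.norm(p=2, dim=1)
```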
- How Does Unlabeled Data Provably Help Out-of-Distribution Detection? [63.41681272937562]
Harnessing unlabeled in-the-wild data is non-trivial due to the heterogeneity of both in-distribution (ID) and out-of-distribution (OOD) data.
This paper introduces a new learning framework SAL (Separate And Learn) that offers both strong theoretical guarantees and empirical effectiveness.
arXiv Detail & Related papers (2024-02-05T20:36:33Z)
- Spot The Odd One Out: Regularized Complete Cycle Consistent Anomaly Detector GAN [4.5123329001179275]
This study presents an adversarial method for anomaly detection in real-world applications, leveraging the power of generative adversarial networks (GANs).
Previous methods suffer from high variance in class-wise accuracy, which makes them inapplicable to all types of anomalies.
The proposed method, named RCALAD, tries to solve this problem by introducing a novel discriminator into the structure, which results in a more efficient training process.
arXiv Detail & Related papers (2023-04-16T13:05:39Z)
- Connective Reconstruction-based Novelty Detection [3.7706789983985303]
Deep learning has enabled us to analyze real-world data that contain unexplained samples.
GAN-based approaches have been widely used to address this problem due to their ability to perform distribution fitting.
We propose a simple yet efficient reconstruction-based method that avoids adding complexity to compensate for the limitations of GAN models (see the sketch below).
arXiv Detail & Related papers (2022-10-25T11:09:39Z)
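As a concrete baseline for the reconstruction-based idea described above, here is a minimal sketch: an autoencoder trained only on normal data is assumed to reconstruct novel samples poorly, so the per-sample pixel error serves as the novelty score. This is the simplest instantiation, not the paper's exact score.

```python
import torch

@torch.no_grad()
def reconstruction_novelty_score(x, autoencoder):
    """Per-sample mean squared reconstruction error (higher = more novel),
    assuming `autoencoder` was trained on normal data only."""
    x_hat = autoencoder(x)
    return ((x - x_hat) ** 2).flatten(start_dim=1).mean(dim=1)
```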
- Rethinking Reconstruction Autoencoder-Based Out-of-distribution Detection [0.0]
Reconstruction autoencoder-based methods deal with the problem by using the input reconstruction error as a metric of novelty vs. normality.
We introduce semantic reconstruction, data certainty decomposition, and normalized L2 distance to substantially improve the original methods.
Our method works without any additional data, hard-to-implement structures, or time-consuming pipelines, and without harming the classification accuracy on known classes.
arXiv Detail & Related papers (2022-03-04T09:04:55Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recover [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem (see the unrolling sketch below).
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
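REST builds on algorithm unfolding; as background, the sketch below shows a plain LISTA-style unrolled shrinkage-thresholding network for sparse recovery. REST's robustification against forward-model mis-specification is not included, and the layer sizes and threshold initialization are assumptions.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """LISTA-style unrolling: each layer is one learned iteration of
    iterative shrinkage-thresholding for y = Ax + noise, with x sparse."""

    def __init__(self, m, n, num_layers=8):
        super().__init__()
        self.We = nn.Linear(m, n, bias=False)  # learned measurement-injection step
        self.S = nn.Linear(n, n, bias=False)   # learned iterate-update step
        self.theta = nn.Parameter(torch.full((num_layers,), 0.1))  # per-layer thresholds
        self.num_layers = num_layers

    def forward(self, y):
        z = torch.zeros(y.shape[0], self.S.in_features, device=y.device)
        for k in range(self.num_layers):
            pre = self.We(y) + self.S(z)
            # Soft-thresholding (shrinkage) with a learned threshold.
            z = torch.sign(pre) * torch.clamp(pre.abs() - self.theta[k], min=0.0)
        return z
```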
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are the most effective (see the sketch below).
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
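The sketch below illustrates the latent-perturbation augmentation in its simplest form: encode with an invertible model, perturb the latent code, and decode. `flow.forward` and `flow.inverse` are assumed interfaces of a trained normalizing flow, and random noise stands in for the paper's classifier-adaptive adversarial perturbation.

```python
import torch

@torch.no_grad()
def latent_augment(x, flow, eps=0.1):
    """Unsupervised augmentation via latent perturbation (illustrative):
    random noise replaces the adversarial perturbation used in the paper."""
    z = flow.forward(x)                      # map data to latent space
    z_pert = z + eps * torch.randn_like(z)   # perturb the latent code
    return flow.inverse(z_pert)              # decode the perturbed latent
```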
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method in which, from first principles, we combine a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way, we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of the Adversarial Autoencoder that uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.