Out-of-Distribution Detection with Reconstruction Error and
Typicality-based Penalty
- URL: http://arxiv.org/abs/2212.12641v1
- Date: Sat, 24 Dec 2022 03:10:28 GMT
- Title: Out-of-Distribution Detection with Reconstruction Error and
Typicality-based Penalty
- Authors: Genki Osada, Tsubasa Takahashi, Budrul Ahsan, Takashi Nishide
- Abstract summary: We propose a new reconstruction error-based approach that employs normalizing flow (NF), yielding the penalized reconstruction error (PRE).
Because the PRE detects test inputs that lie off the in-distribution manifold, it effectively detects adversarial examples as well as OOD examples.
We show the effectiveness of our method through the evaluation using natural image datasets, CIFAR-10, TinyImageNet, and ILSVRC2012.
- Score: 3.7277730514654555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of out-of-distribution (OOD) detection is vital to realize safe and
reliable operation for real-world applications. After the failure of
likelihood-based detection in high dimensions had been shown, approaches based
on the \emph{typical set} have been attracting attention; however, they still
have not achieved satisfactory performance. Beginning by presenting the failure
case of the typicality-based approach, we propose a new reconstruction
error-based approach that employs normalizing flow (NF). We further introduce a
typicality-based penalty, and by incorporating it into the reconstruction error
in NF, we propose a new OOD detection method, penalized reconstruction error
(PRE). Because the PRE detects test inputs that lie off the in-distribution
manifold, it effectively detects adversarial examples as well as OOD examples.
We show the effectiveness of our method through the evaluation using natural
image datasets, CIFAR-10, TinyImageNet, and ILSVRC2012.
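To make the abstract concrete, the following is a minimal Python sketch of a PRE-style score: a reconstruction error obtained with a normalizing flow, combined with a typicality-based penalty. The abstract does not give the exact formulation, so the toy affine flow, the latent projection used to form the reconstruction, the entropy estimate h_hat, and the weighting lam below are illustrative assumptions, not the paper's actual method.
```python
# Illustrative sketch only; the flow, latent projection, and penalty weighting are assumptions.
import numpy as np

class ToyAffineFlow:
    """Stand-in for a trained normalizing flow with a standard-normal base distribution."""
    def __init__(self, scale, shift):
        self.scale, self.shift = scale, shift

    def forward(self, x):
        # x -> z, plus log|det dz/dx| for the change of variables
        z = (x - self.shift) / self.scale
        log_det = -np.sum(np.log(np.abs(self.scale)))
        return z, log_det

    def inverse(self, z):
        # z -> x
        return z * self.scale + self.shift

def nll_per_dim(flow, x):
    """Negative log-likelihood per dimension, -log p(x) / D."""
    z, log_det = flow.forward(x)
    d = x.size
    log_pz = -0.5 * np.sum(z ** 2) - 0.5 * d * np.log(2.0 * np.pi)
    return -(log_pz + log_det) / d

def pre_style_score(flow, x, project, h_hat, lam=1.0):
    """Toy penalized reconstruction error: MSE between x and its reconstruction from a
    projected latent code, plus |NLL/dim - h_hat| as a typicality-based penalty."""
    z, _ = flow.forward(x)
    x_rec = flow.inverse(project(z))
    rec_err = np.mean((x - x_rec) ** 2)
    typicality_penalty = abs(nll_per_dim(flow, x) - h_hat)
    return rec_err + lam * typicality_penalty

# Usage: h_hat would be estimated as the mean NLL/dim on held-out in-distribution data,
# and a test input is flagged as OOD when its score exceeds a validation threshold.
flow = ToyAffineFlow(scale=np.array([2.0, 0.5]), shift=np.array([1.0, -1.0]))
project = lambda z: np.where(np.abs(z) > 0.1, z, 0.0)   # hypothetical latent projection
print(pre_style_score(flow, np.array([3.0, 0.2]), project, h_hat=1.6))
```
The typicality term penalizes inputs whose per-dimension negative log-likelihood deviates from the value typical of in-distribution data, which is meant to guard against OOD inputs that a flow nonetheless assigns high likelihood.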
Related papers
- Exploring Out-of-distribution Detection for Sparse-view Computed Tomography with Diffusion Models [1.6704428692159]
We study the use of a diffusion model, trained to capture the target distribution for CT reconstruction as an in-distribution prior.
We employ the model to reconstruct partially diffused input images and assess OOD-ness through multiple reconstruction errors.
Our findings suggest that effective OOD detection can be achieved by comparing measurements with forward-projected reconstructions.
arXiv Detail & Related papers (2024-11-09T23:17:42Z)
- Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models [72.07462371883501]
We propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.
PR computes the perceptual distance between the test image and its diffusion-based projection to detect abnormality.
Extensive experiments demonstrate that PR outperforms the prior art of generative-model-based novelty detection methods by a significant margin.
arXiv Detail & Related papers (2023-12-05T09:44:47Z)
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo-anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with other existing state-of-the-art PAs generation and reconstruction based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z)
- Free Lunch for Generating Effective Outlier Supervision [46.37464572099351]
We propose an ultra-effective method to generate near-realistic outlier supervision.
Our proposed BayesAug significantly reduces the false positive rate by over 12.50% compared with previous schemes.
arXiv Detail & Related papers (2023-01-17T01:46:45Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset, a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- ReAct: Out-of-distribution Detection With Rectified Activations [20.792140933660075]
Out-of-distribution (OOD) detection has received much attention lately due to its practical importance.
One of the primary challenges is that models often produce highly confident predictions on OOD data.
We propose ReAct, a simple and effective technique that reduces model overconfidence on OOD data by rectifying (truncating) unusually high penultimate-layer activations; a minimal sketch of this idea is given after the list.
arXiv Detail & Related papers (2021-11-24T21:02:07Z)
- Self-Supervised Predictive Convolutional Attentive Block for Anomaly Detection [97.93062818228015]
We propose to integrate the reconstruction-based functionality into a novel self-supervised predictive architectural building block.
Our block is equipped with a loss that minimizes the reconstruction error with respect to the masked area in the receptive field.
We demonstrate the generality of our block by integrating it into several state-of-the-art frameworks for anomaly detection on image and video.
arXiv Detail & Related papers (2021-11-17T13:30:31Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Interpreting Rate-Distortion of Variational Autoencoder and Using Model Uncertainty for Anomaly Detection [5.491655566898372]
We build a scalable machine learning system for unsupervised anomaly detection via representation learning.
We revisit VAE from the perspective of information theory to provide some theoretical foundations on using the reconstruction error.
We show empirically the competitive performance of our approach on benchmark datasets.
arXiv Detail & Related papers (2020-05-05T00:03:48Z)
- Unsupervised Lesion Detection via Image Restoration with a Normative Prior [6.495883501989547]
We propose a probabilistic model that uses a network-based prior as the normative distribution and detects lesions pixel-wise using MAP estimation.
Experiments with gliomas and stroke lesions in brain MRI show that the proposed approach outperforms the state-of-the-art unsupervised methods by a substantial margin.
arXiv Detail & Related papers (2020-04-30T18:03:18Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
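For the ReAct entry above, here is a minimal PyTorch sketch of the rectified-activation idea, assuming the network is split into a feature extractor and a final linear layer; the 90th-percentile clipping threshold and the energy score are common choices in practice, not details taken from the summary above.
```python
# Minimal ReAct-style sketch; the percentile and the energy score are assumed conventions.
import torch
import torch.nn as nn

def react_score(features: torch.Tensor, fc: nn.Linear, clip: float) -> torch.Tensor:
    """Clip (rectify) penultimate-layer activations at `clip`, then return an
    energy-style confidence score (higher means more in-distribution-like)."""
    rectified = torch.clamp(features, max=clip)   # truncate unusually large activations
    logits = fc(rectified)
    return torch.logsumexp(logits, dim=-1)

# Usage with hypothetical shapes: estimate the clipping threshold from
# in-distribution activations (e.g. their 90th percentile), then score test batches.
fc = nn.Linear(512, 10)
id_features = torch.rand(1000, 512)
clip = torch.quantile(id_features, 0.90).item()
scores = react_score(torch.rand(8, 512), fc, clip)
```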