DOODLER: Determining Out-Of-Distribution Likelihood from Encoder
Reconstructions
- URL: http://arxiv.org/abs/2109.13237v1
- Date: Mon, 27 Sep 2021 14:54:55 GMT
- Authors: Jonathan S. Kent, Bo Li
- Abstract summary: This paper introduces and examines a novel methodology, DOODLER, for Out-Of-Distribution Detection.
By training a Variational Auto-Encoder on the same data as another Deep Learning model, the VAE learns to accurately reconstruct In-Distribution (ID) inputs, but not to reconstruct OOD inputs.
Unlike other work in the area, DOODLER requires only very weak assumptions about the existence of an OOD dataset, allowing for more realistic application.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep Learning models possess two key traits that, in combination, make their
use in the real world a risky prospect. One, they do not typically generalize
well outside of the distribution for which they were trained, and two, they
tend to exhibit confident behavior regardless of whether or not they are
producing meaningful outputs. While Deep Learning possesses immense power to
solve realistic, high-dimensional problems, these traits in concert make it
difficult to have confidence in their real-world applications. To overcome this
difficulty, the task of Out-Of-Distribution (OOD) Detection has been defined,
to determine when a model has received an input from outside of the
distribution for which it is trained to operate.
This paper introduces and examines a novel methodology, DOODLER, for OOD
Detection, which directly leverages the traits which result in its necessity.
By training a Variational Auto-Encoder (VAE) on the same data as another Deep
Learning model, the VAE learns to accurately reconstruct In-Distribution (ID)
inputs, but not to reconstruct OOD inputs, meaning that its failure state can
be used to perform OOD Detection. Unlike other work in the area, DOODLER
requires only very weak assumptions about the existence of an OOD dataset,
allowing for more realistic application. DOODLER also enables pixel-wise
segmentations of input images by OOD likelihood, and experimental results show
that it matches or outperforms methodologies that operate under the same
constraints.
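The reconstruction-error idea behind DOODLER can be sketched in a few lines. Below, `x_hat` stands in for the output of a VAE trained on ID data; the pixel threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def ood_scores(x, x_hat, pixel_threshold=0.1):
    """Per-pixel and image-level OOD scores from reconstruction error.

    `x_hat` is assumed to come from a VAE trained on the same ID data
    as the downstream model; the threshold is illustrative only.
    """
    err = (x - x_hat) ** 2        # per-pixel squared reconstruction error
    mask = err > pixel_threshold  # pixel-wise OOD segmentation
    score = float(err.mean())     # image-level OOD likelihood proxy
    return score, mask

# An ID-like input is reconstructed closely; an OOD-like input is not,
# so its mean reconstruction error (the OOD score) comes out higher.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
id_score, _ = ood_scores(x, x + 0.01)             # near-perfect reconstruction
ood_score, _ = ood_scores(x, rng.random((8, 8)))  # unrelated "reconstruction"
```

Flagging an input as OOD then reduces to comparing `score` against a threshold calibrated on held-out ID data, while `mask` gives the pixel-wise segmentation the paper describes.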
Related papers
- Action-OOD: An End-to-End Skeleton-Based Model for Robust Out-of-Distribution Human Action Detection [17.85872085904999]
Action-OOD is a novel end-to-end skeleton-based model for action detection.
We introduce an attention-based feature fusion block, which enhances the model's capability to recognize unknown classes.
We demonstrate the superior performance of our proposed approach compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-05-31T05:49:37Z) - Out-of-distribution Detection Learning with Unreliable
Out-of-distribution Sources [73.28967478098107]
Out-of-distribution (OOD) detection discerns OOD data, on which the predictor cannot make predictions as valid as it does on in-distribution (ID) data.
It is typically hard to collect real OOD data for training a predictor capable of discerning OOD patterns.
We propose a data-generation-based learning method named Auxiliary Task-based OOD Learning (ATOL) that can mitigate mistaken OOD generation.
arXiv Detail & Related papers (2023-11-06T16:26:52Z) - OOD Aware Supervised Contrastive Learning [13.329080722482187]
Out-of-Distribution (OOD) detection is a crucial problem for the safe deployment of machine learning models.
We leverage the powerful representation learned with Supervised Contrastive (SupCon) training and propose a holistic approach to learn a representation that is robust to OOD data.
Our solution is simple and efficient and acts as a natural extension of the closed-set supervised contrastive representation learning.
arXiv Detail & Related papers (2023-10-03T10:38:39Z) - Unsupervised Out-of-Distribution Detection by Restoring Lossy Inputs
with Variational Autoencoder [3.498694457257263]
We propose a novel VAE-based score called Error Reduction (ER) for OOD detection.
ER is based on a VAE that takes a lossy version of the training set as inputs and the original set as targets.
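The Error Reduction idea can be sketched as follows. Here `restore` stands in for a VAE trained to map lossy ID inputs back to their originals, and the exact scoring rule is an illustrative assumption rather than the paper's definition.

```python
import numpy as np

def lossy(x, k=2):
    """Degrade an image via k-fold average pooling then nearest upsampling."""
    h, w = x.shape
    pooled = x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return pooled.repeat(k, axis=0).repeat(k, axis=1)

def error_reduction_score(x, restore):
    """ER-style score sketch: how much of the degradation does the
    restorer undo?  `restore` is assumed to be a VAE trained on lossy
    ID inputs with the originals as targets; the subtraction here is
    an illustrative scoring rule, not the paper's exact formula."""
    x_lossy = lossy(x)
    base_err = float(((x - x_lossy) ** 2).mean())
    rest_err = float(((x - restore(x_lossy)) ** 2).mean())
    return base_err - rest_err  # large reduction -> likely ID
```

A restorer trained on ID data recovers ID inputs well (large score) but fails to recover OOD inputs (small or negative score), so thresholding the score yields a detector.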
arXiv Detail & Related papers (2023-09-05T09:42:15Z) - Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then fine-tunes the model, or prunes it with the introduced mask, to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z) - Out-of-distribution Detection with Implicit Outlier Transformation [72.73711947366377]
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well for unseen OOD situations.
arXiv Detail & Related papers (2023-03-09T04:36:38Z) - Using Semantic Information for Defining and Detecting OOD Inputs [3.9577682622066264]
Out-of-distribution (OOD) detection has received some attention recently.
We demonstrate that the current detectors inherit the biases in the training dataset.
This can render current OOD detectors insensitive to inputs that lie outside the training distribution but carry the same semantic information.
We perform OOD detection on semantic information extracted from the training data of MNIST and COCO datasets.
arXiv Detail & Related papers (2023-02-21T21:31:20Z) - Harnessing Out-Of-Distribution Examples via Augmenting Content and Style [93.21258201360484]
Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples.
This paper proposes a HOOD method that can leverage the content and style from each image instance to identify benign and malign OOD data.
Thanks to the proposed novel disentanglement and data augmentation techniques, HOOD can effectively deal with OOD examples in unknown and open environments.
arXiv Detail & Related papers (2022-07-07T08:48:59Z) - Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD
Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
arXiv Detail & Related papers (2022-06-20T16:32:49Z) - OODformer: Out-Of-Distribution Detection Transformer [15.17006322500865]
In real-world safety-critical applications, it is important to be aware if a new data point is OOD.
This paper proposes a first-of-its-kind OOD detection architecture named OODformer.
arXiv Detail & Related papers (2021-07-19T15:46:38Z) - Learn what you can't learn: Regularized Ensembles for Transductive
Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.