FOOD: Fast Out-Of-Distribution Detector
- URL: http://arxiv.org/abs/2008.06856v4
- Date: Tue, 23 Feb 2021 16:19:52 GMT
- Title: FOOD: Fast Out-Of-Distribution Detector
- Authors: Guy Amit, Moshe Levy, Ishai Rosenberg, Asaf Shabtai, Yuval Elovici
- Abstract summary: FOOD is an extended deep neural network (DNN) capable of efficiently detecting OOD samples with minimal inference time overhead.
We evaluate FOOD's detection performance on the SVHN, CIFAR-10, and CIFAR-100 datasets.
Our results demonstrate that in addition to achieving state-of-the-art performance, FOOD is fast and applicable to real-world applications.
- Score: 43.31844129399436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) perform well at classifying inputs associated
with the classes they have been trained on, which are known as in-distribution
inputs. However, out-of-distribution (OOD) inputs pose a great challenge to
DNNs and consequently represent a major risk when DNNs are implemented in
safety-critical systems. Extensive research has been performed in the domain of
OOD detection. However, current state-of-the-art methods for OOD detection
suffer from at least one of the following limitations: (1) increased inference
time - this limits existing methods' applicability to many real-world
applications, and (2) the need for OOD training data - such data can be
difficult to acquire and may not be representative enough, thus limiting the
ability of the OOD detector to generalize. In this paper, we propose FOOD --
Fast Out-Of-Distribution detector -- an extended DNN classifier capable of
efficiently detecting OOD samples with minimal inference time overhead. Our
architecture features a DNN with a final Gaussian layer combined with the log
likelihood ratio statistical test and an additional output neuron for OOD
detection. Instead of using real OOD data, we use a novel method to craft
artificial OOD samples from in-distribution data, which are used to train our
OOD detector neuron. We evaluate FOOD's detection performance on the SVHN,
CIFAR-10, and CIFAR-100 datasets. Our results demonstrate that in addition to
achieving state-of-the-art performance, FOOD is fast and applicable to
real-world applications.
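The core scoring idea described in the abstract, class-conditional Gaussians over features combined with a log-likelihood ratio test, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class means, covariances, pooled "background" Gaussian, and all numbers are hypothetical.

```python
# Sketch: per-class Gaussian log-likelihoods over penultimate features,
# with a log-likelihood-ratio (LLR) score for OOD detection.
# All parameters below are synthetic; this illustrates the idea, not FOOD itself.
import numpy as np

def gaussian_log_likelihood(z, mean, cov):
    """Log density of feature vector z under N(mean, cov)."""
    d = z.shape[0]
    diff = z - mean
    cov_inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + diff @ cov_inv @ diff)

def llr_score(z, class_means, class_covs, pooled_mean, pooled_cov):
    """Best class-conditional log-likelihood minus a pooled 'background'
    log-likelihood; low values suggest the input is OOD."""
    best = max(gaussian_log_likelihood(z, m, c)
               for m, c in zip(class_means, class_covs))
    return best - gaussian_log_likelihood(z, pooled_mean, pooled_cov)

# Two well-separated in-distribution classes in a 2-D feature space.
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2) * 0.5, np.eye(2) * 0.5]
pooled_mean, pooled_cov = np.array([2.5, 2.5]), np.eye(2) * 7.0

in_dist = np.array([0.1, -0.2])   # near class 0
ood = np.array([20.0, -15.0])     # far from both classes
# The in-distribution point scores much higher than the OOD point,
# so thresholding the LLR separates the two.
assert llr_score(in_dist, means, covs, pooled_mean, pooled_cov) > \
       llr_score(ood, means, covs, pooled_mean, pooled_cov)
```

In the paper's setting the decision is delegated to a trained extra output neuron rather than a hand-set threshold; the sketch only shows why the ratio itself is discriminative.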
Related papers
- Can OOD Object Detectors Learn from Foundation Models? [56.03404530594071]
Out-of-distribution (OOD) object detection is a challenging task due to the absence of open-set OOD data.
Inspired by recent advancements in text-to-image generative models, we study the potential of generative models trained on large-scale open-set data to synthesize OOD samples.
We introduce SyncOOD, a simple data curation method that capitalizes on the capabilities of large foundation models.
arXiv Detail & Related papers (2024-09-08T17:28:22Z)
- Detection of out-of-distribution samples using binary neuron activation patterns [0.26249027950824505]
The ability to identify previously unseen inputs as novel is crucial in safety-critical applications such as self-driving cars, unmanned aerial vehicles, and robots.
Existing approaches to detect OOD samples treat a DNN as a black box and evaluate the confidence score of the output predictions.
In this work, we introduce a novel method for OOD detection. Our method is motivated by theoretical analysis of neuron activation patterns (NAP) in ReLU-based architectures.
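The activation-pattern idea can be sketched as follows: record which ReLU units fire for training inputs, then flag a test input whose binary pattern is far (in Hamming distance) from every stored pattern. The weights, inputs, and threshold are hypothetical; this is not the paper's code.

```python
# Sketch of OOD detection from binary neuron activation patterns (NAPs).
# The layer weights, stored patterns, and threshold here are synthetic.
import numpy as np

def activation_pattern(weights, x):
    """Binary pattern of a single ReLU layer: 1 where the unit fires."""
    return (weights @ x > 0).astype(np.uint8)

def min_hamming(pattern, stored_patterns):
    """Distance from a pattern to the nearest stored training pattern."""
    return min(int(np.sum(pattern != p)) for p in stored_patterns)

W = np.array([[1., 0.], [0., 1.], [1., 1.]])   # one hypothetical 3-unit layer
train = [np.array([1., 1.]), np.array([1., -2.])]
stored = [activation_pattern(W, x) for x in train]   # [1,1,1] and [1,0,0]

test_in = np.array([2., 3.])     # same pattern as a training input
test_ood = np.array([-1., -1.])  # no unit fires: pattern [0,0,0]
threshold = 1
assert min_hamming(activation_pattern(W, test_in), stored) == 0
assert min_hamming(activation_pattern(W, test_ood), stored) >= threshold
```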
arXiv Detail & Related papers (2022-12-29T11:42:46Z)
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE - POsthoc pseudo-Ood REgularization, that generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
- Igeood: An Information Geometry Approach to Out-of-Distribution Detection [35.04325145919005]
We introduce Igeood, an effective method for detecting out-of-distribution (OOD) samples.
Igeood applies to any pre-trained neural network and works under various degrees of access to the machine learning model.
We show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets.
arXiv Detail & Related papers (2022-03-15T11:26:35Z)
- ProtoInfoMax: Prototypical Networks with Mutual Information Maximization for Out-of-Domain Detection [19.61846393392849]
ProtoInfoMax is a new architecture that extends Prototypical Networks to simultaneously process In-Domain (ID) and OOD sentences.
We show that our proposed method can substantially improve performance, by up to 20%, for OOD detection in low-resource settings.
arXiv Detail & Related papers (2021-08-27T11:55:34Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that, from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance on non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- MOOD: Multi-level Out-of-distribution Detection [13.207044902083057]
Out-of-distribution (OOD) detection is essential to prevent anomalous inputs from causing a model to fail during deployment.
We propose a novel framework, multi-level out-of-distribution detection (MOOD), which exploits intermediate classifier outputs for dynamic and efficient OOD inference.
MOOD achieves up to 71.05% computational reduction in inference, while maintaining competitive OOD detection performance.
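The early-exit mechanism behind this kind of computational saving can be sketched as follows. The exits, scores, and confidence threshold are hypothetical stand-ins, not MOOD's actual implementation.

```python
# Sketch of early-exit OOD inference: each intermediate exit yields an
# OOD score plus a confidence, and inference stops at the first exit
# confident enough to answer, skipping the deeper (costlier) layers.
def early_exit_ood(exits, x, confidence_threshold=0.9):
    """exits: list of callables x -> (ood_score, confidence).
    Returns (ood_score, index of the exit that answered)."""
    for i, exit_fn in enumerate(exits):
        score, conf = exit_fn(x)
        # The last exit always answers, confident or not.
        if conf >= confidence_threshold or i == len(exits) - 1:
            return score, i

# Toy stack: a cheap shallow exit that is confident on 'easy' inputs,
# and an expensive deep exit that always answers.
exits = [
    lambda x: (0.1, 0.95) if x < 1.0 else (0.5, 0.4),  # shallow exit
    lambda x: (0.9, 1.0),                              # deep exit
]
_, used = early_exit_ood(exits, 0.5)
assert used == 0   # easy input resolved at the cheap exit
_, used = early_exit_ood(exits, 5.0)
assert used == 1   # hard input falls through to the final exit
```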
arXiv Detail & Related papers (2021-04-30T02:18:31Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluating on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.