Model2Detector: Widening the Information Bottleneck for
Out-of-Distribution Detection using a Handful of Gradient Steps
- URL: http://arxiv.org/abs/2202.11226v1
- Date: Tue, 22 Feb 2022 23:03:40 GMT
- Title: Model2Detector: Widening the Information Bottleneck for
Out-of-Distribution Detection using a Handful of Gradient Steps
- Authors: Sumedh A Sontakke, Buvaneswari Ramanan, Laurent Itti, Thomas Woo
- Abstract summary: Out-of-distribution detection is an important capability that has long eluded vanilla neural networks.
Recent advances in inference-time out-of-distribution detection help mitigate some of these problems.
We show how our method consistently outperforms the state-of-the-art in detection accuracy on popular image datasets.
- Score: 12.263417500077383
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Out-of-distribution detection is an important capability that has long eluded
vanilla neural networks. Deep neural networks (DNNs) tend to generate
over-confident predictions when presented with inputs that are significantly
out-of-distribution (OOD). This can be dangerous when deploying machine
learning systems in the wild, as attacks can be difficult to detect. Recent
advances in inference-time out-of-distribution detection help mitigate some of
these problems. However, existing methods can be restrictive as they are often
computationally expensive. Additionally, these methods require training a
downstream detector model that learns to distinguish OOD inputs from
in-distribution ones, which adds latency during inference. Here, we
offer an information theoretic perspective on why neural networks are
inherently incapable of OOD detection. We attempt to mitigate these flaws by
converting a trained model into an OOD detector using a handful of steps of
gradient descent. Our work can be employed as a post-processing method whereby
an inference-time ML system can convert a trained model into an OOD detector.
Experimentally, we show how our method consistently outperforms the
state-of-the-art in detection accuracy on popular image datasets while also
reducing computational complexity.
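The abstract leaves the fine-tuning objective unspecified, so the following is only a minimal sketch of the post-processing pattern it describes: take an already-trained classifier, run a handful of gradient steps on held-out in-distribution data (plain cross-entropy stands in for the paper's actual loss), and then score inputs with a simple confidence statistic. All names and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def convert_to_detector(model, id_loader, steps=5, lr=1e-4):
    """Post-process a trained model with a handful of gradient steps.
    Cross-entropy on in-distribution data is a stand-in objective;
    the paper's actual loss is not given in the abstract."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    data = iter(id_loader)
    for _ in range(steps):
        x, y = next(data)
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    model.eval()
    return model

def ood_score(model, x):
    """Score inputs by 1 - max softmax probability (higher = more OOD)."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)
    return 1.0 - probs.max(dim=-1).values
```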
Related papers
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then fine-tunes the model or prunes it with the introduced mask to forget them.
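As a rough sketch of the forget-the-atypical-samples idea, assuming a simple loss-percentile heuristic in place of the paper's actual mask construction:

```python
import torch
import torch.nn.functional as F

def atypical_mask(model, xs, ys, quantile=0.95):
    """Flag ID training samples with unusually high loss as atypical.
    The percentile heuristic is an assumption; the paper builds its
    mask differently."""
    model.eval()
    with torch.no_grad():
        losses = F.cross_entropy(model(xs), ys, reduction="none")
    return losses > torch.quantile(losses, quantile)

def forget(model, xs, ys, mask, steps=10, lr=1e-4):
    """Fine-tune with gradient *ascent* on the flagged samples so the
    model un-memorizes them (one possible reading of 'forget')."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        loss = -F.cross_entropy(model(xs[mask]), ys[mask])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```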
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Using Semantic Information for Defining and Detecting OOD Inputs [3.9577682622066264]
Out-of-distribution (OOD) detection has received some attention recently.
We demonstrate that the current detectors inherit the biases in the training dataset.
This can render current OOD detectors blind to inputs that lie outside the training distribution but carry the same semantic information.
We perform OOD detection on semantic information extracted from the training data of MNIST and COCO datasets.
arXiv Detail & Related papers (2023-02-21T21:31:20Z)
- Window-Based Distribution Shift Detection for Deep Neural Networks [21.73028341299301]
We study the case of monitoring the healthy operation of a deep neural network (DNN) receiving a stream of data.
Using selective prediction principles, we propose a distribution deviation detection method for DNNs.
Our novel detection method performs on par with or better than the state-of-the-art while requiring substantially less computation time.
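A minimal sketch of window-based monitoring, using a two-sample KS test on model confidences as a stand-in for the paper's selective-prediction statistic (the class name and threshold are illustrative):

```python
import numpy as np
from scipy.stats import ks_2samp

class WindowShiftDetector:
    """Compare a sliding window of model confidences against a
    reference sample from healthy (in-distribution) operation."""
    def __init__(self, reference_scores, window=500, alpha=1e-3):
        self.ref = np.asarray(reference_scores)
        self.window, self.alpha = window, alpha
        self.buf = []

    def update(self, score):
        """Feed one confidence score; returns True once a full window
        deviates significantly from the reference distribution."""
        self.buf.append(score)
        if len(self.buf) > self.window:
            self.buf.pop(0)
        if len(self.buf) == self.window:
            _, p = ks_2samp(self.ref, np.asarray(self.buf))
            return p < self.alpha  # True = distribution shift detected
        return False
```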
arXiv Detail & Related papers (2022-10-19T21:27:25Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, it can be very difficult to debug a deep learning model when it fails.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Out-of-Distribution Example Detection in Deep Neural Networks using Distance to Modelled Embedding [0.0]
We present Distance to Modelled Embedding (DIME) that we use to detect out-of-distribution examples during prediction time.
By approximating the training set's embedding in feature space as a linear hyperplane, we derive a simple, unsupervised, highly performant, and computationally efficient method.
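A simplified reading of this idea: fit a linear (PCA) subspace to training embeddings and score new embeddings by their reconstruction residual. This is a sketch, not the authors' exact formulation:

```python
import numpy as np
from sklearn.decomposition import PCA

class DIMESketch:
    """Distance-to-subspace OOD score, a simplified stand-in for DIME."""
    def __init__(self, n_components=50):
        self.pca = PCA(n_components=n_components)

    def fit(self, train_embeddings):
        # Approximate the training embedding as a linear subspace.
        self.pca.fit(train_embeddings)
        return self

    def score(self, embeddings):
        # Residual distance to the subspace; higher = more OOD.
        z = self.pca.transform(embeddings)
        recon = self.pca.inverse_transform(z)
        return np.linalg.norm(embeddings - recon, axis=1)
```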
arXiv Detail & Related papers (2021-08-24T12:28:04Z)
- Out-of-Distribution Detection using Outlier Detection Methods [0.0]
Out-of-distribution detection (OOD) deals with anomalous input to neural networks.
We use outlier detection algorithms to detect anomalous inputs as reliably as specialized methods from the field of OOD.
No neural network adaptation is required; detection is based on the model's softmax score.
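Since detection is based only on softmax outputs, the recipe is easy to sketch: fit any off-the-shelf outlier detector on in-distribution softmax vectors and flag outliers at test time. IsolationForest is one arbitrary choice among the algorithms such a study might evaluate:

```python
from sklearn.ensemble import IsolationForest

def fit_softmax_outlier_detector(id_softmax):
    """Fit an off-the-shelf outlier detector on in-distribution
    softmax output vectors; the network itself is untouched."""
    return IsolationForest(random_state=0).fit(id_softmax)

def is_ood(det, softmax_probs):
    # IsolationForest.predict returns -1 for outliers, +1 for inliers.
    return det.predict(softmax_probs) == -1
```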
arXiv Detail & Related papers (2021-08-18T16:05:53Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
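A full normalizing flow is beyond a short sketch, so the following substitutes a kernel density estimator for the flow while keeping the monitoring pattern: fit a density model on in-distribution activations and flag low-density activations at test time. All names and the threshold quantile are illustrative:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

class ActivationDensityMonitor:
    """Monitor hidden activations with a density model. A KDE stands
    in here for DAAIN's normalizing flow."""
    def __init__(self, bandwidth=1.0):
        self.kde = KernelDensity(bandwidth=bandwidth)

    def fit(self, id_activations, quantile=0.01):
        self.kde.fit(id_activations)
        # Threshold at a low quantile of in-distribution log-density.
        scores = self.kde.score_samples(id_activations)
        self.threshold = np.quantile(scores, quantile)
        return self

    def flag(self, activations):
        # Low log-density relative to ID data => OOD or adversarial.
        return self.kde.score_samples(activations) < self.threshold
```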
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
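The artificial labeling and regularization are what make the method work on a test batch; the signal they amplify is ensemble disagreement, which can be sketched as follows (the KL-based statistic is an assumption, not the paper's exact score):

```python
import numpy as np

def disagreement_score(prob_stack):
    """prob_stack: (n_models, n_samples, n_classes) softmax outputs.
    Scores each sample by how much ensemble members contradict each
    other; the paper trains its ensembles so that this disagreement
    concentrates on the OOD samples in a test batch."""
    eps = 1e-12
    mean = prob_stack.mean(axis=0)
    # Average KL divergence of each member from the ensemble mean.
    kl = (prob_stack * (np.log(prob_stack + eps) - np.log(mean + eps))).sum(-1)
    return kl.mean(axis=0)  # higher = more disagreement = more likely OOD
```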
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- A Novel Anomaly Detection Algorithm for Hybrid Production Systems based on Deep Learning and Timed Automata [73.38551379469533]
DAD: DeepAnomalyDetection is a new approach for automatic model learning and anomaly detection in hybrid production systems.
It combines deep learning and timed automata to create a behavioral model from observations.
The algorithm has been applied to a few data sets, including two from real systems, and has shown promising results.
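The deep-learning half (learning the behavioral model) is out of scope for a short sketch, but the timed-automaton half can be illustrated: record per-transition dwell-time bounds from observations, then flag unseen transitions or out-of-bounds timings as anomalous. This toy checker is an assumption about how such a model is used, not DAD's actual algorithm:

```python
class TimedAutomatonChecker:
    """Toy timed automaton learned from observed transitions."""
    def __init__(self):
        # (state, event) -> (next_state, min_dt, max_dt)
        self.transitions = {}

    def learn(self, state, event, next_state, dt):
        """Record a transition and widen its dwell-time bounds."""
        key = (state, event)
        if key not in self.transitions:
            self.transitions[key] = (next_state, dt, dt)
        else:
            nxt, lo, hi = self.transitions[key]
            self.transitions[key] = (nxt, min(lo, dt), max(hi, dt))

    def is_anomalous(self, state, event, dt):
        """Unseen transition or dwell time outside learned bounds."""
        key = (state, event)
        if key not in self.transitions:
            return True
        _, lo, hi = self.transitions[key]
        return not (lo <= dt <= hi)
```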
arXiv Detail & Related papers (2020-10-29T08:27:43Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
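Trigger reverse-engineering in general optimizes a patch and mask so that stamping them onto clean images flips predictions to a target label; the sketch below shows that generic loop in PyTorch. The paper's contribution, a measure whose cost does not scale with the number of labels, is not reproduced here:

```python
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, images, target, steps=200, lr=0.1):
    """Optimize a patch + mask so stamping them onto clean images
    flips predictions to `target`. A generic sketch of trigger
    reverse-engineering, not this paper's exact procedure."""
    patch = torch.zeros_like(images[0], requires_grad=True)
    mask = torch.zeros(images.shape[-2:], requires_grad=True)
    opt = torch.optim.Adam([patch, mask], lr=lr)
    targets = torch.full((len(images),), target, dtype=torch.long)
    for _ in range(steps):
        m = torch.sigmoid(mask)                  # soft mask in [0, 1]
        stamped = images * (1 - m) + patch * m   # apply candidate trigger
        loss = F.cross_entropy(model(stamped), targets)
        loss = loss + 1e-3 * m.abs().sum()       # prefer small triggers
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach(), torch.sigmoid(mask).detach()
```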
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.