Are all outliers alike? On Understanding the Diversity of Outliers for
Detecting OODs
- URL: http://arxiv.org/abs/2103.12628v1
- Date: Tue, 23 Mar 2021 15:33:58 GMT
- Title: Are all outliers alike? On Understanding the Diversity of Outliers for
Detecting OODs
- Authors: Ramneet Kaur, Susmit Jha, Anirban Roy, Oleg Sokolsky, Insup Lee
- Abstract summary: This paper presents a taxonomy of OOD outlier inputs based on their source and nature of uncertainty.
We develop a novel integrated detection approach that uses multiple attributes corresponding to different types of outliers.
- Score: 11.211251493663267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are known to produce incorrect predictions with
very high confidence on out-of-distribution (OOD) inputs. This limitation is
one of the key challenges in the adoption of deep learning models in
high-assurance systems such as autonomous driving, air traffic management, and
medical diagnosis. This challenge has received significant attention recently,
and several techniques have been developed to detect inputs where the model's
prediction cannot be trusted. These techniques use different statistical,
geometric, or topological signatures. This paper presents a taxonomy of OOD
outlier inputs based on their source and nature of uncertainty. We demonstrate
how different existing detection approaches fail to detect certain types of
outliers. We utilize these insights to develop a novel integrated detection
approach that uses multiple attributes corresponding to different types of
outliers. Our results include experiments on CIFAR10, SVHN and MNIST as
in-distribution data and ImageNet, LSUN, SVHN (for CIFAR10), CIFAR10 (for
SVHN), KMNIST, and F-MNIST as OOD data across different DNN architectures such
as ResNet34, WideResNet, DenseNet, and LeNet5.
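The integrated detector described in the abstract combines multiple outlier attributes rather than relying on a single score. As a hedged illustration of that idea (not the authors' exact method), the sketch below fuses two hypothetical attributes, a softmax-confidence score and a feature-space distance score, and flags an input when either one trips its threshold:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence_score(logits):
    """Aleatoric-style attribute: low max-softmax suggests an ambiguous input."""
    return softmax(logits).max(axis=-1)

def distance_score(features, class_means):
    """Epistemic-style attribute: distance to the nearest class mean in feature space."""
    d = np.linalg.norm(features[:, None, :] - class_means[None, :, :], axis=-1)
    return -d.min(axis=-1)  # higher = closer to the training manifold

def is_ood(logits, features, class_means, conf_thresh=0.9, dist_thresh=-5.0):
    """Flag an input as OOD if *any* attribute trips its threshold.
    Thresholds here are hypothetical; in practice they would be calibrated
    on held-out in-distribution data (e.g., at 95% TPR)."""
    low_conf = confidence_score(logits) < conf_thresh
    far_away = distance_score(features, class_means) < dist_thresh
    return low_conf | far_away

# Toy usage with random stand-in data.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))
features = rng.normal(size=(4, 32))
class_means = rng.normal(size=(10, 32))
print(is_ood(logits, features, class_means))
```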
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish OOD inputs from in-distribution (ID) data.
We introduce a novel perspective, i.e., employing different common corruptions on the input space.
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
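A hedged sketch of the corruption-expansion idea from this summary (the paper's actual scoring functions and corruption set are not reproduced here): compute a confidence score on several corrupted copies of the input and aggregate.

```python
import numpy as np

def max_softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

def corruption_expanded_score(model, x, sigmas=(0.0, 0.05, 0.1), rng=None):
    """Average a confidence score over Gaussian-noise corruptions of the input.
    Intuition: ID inputs tend to keep high confidence under mild corruption,
    while OOD confidence is more fragile."""
    rng = rng or np.random.default_rng(0)
    scores = [max_softmax(model(x + rng.normal(scale=s, size=x.shape))) for s in sigmas]
    return np.mean(scores, axis=0)

# Toy stand-in "model": a fixed random linear classifier.
rng = np.random.default_rng(1)
W = rng.normal(size=(64, 10))
model = lambda x: x @ W
x = rng.normal(size=(4, 64))
print(corruption_expanded_score(model, x))
```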
- TEN-GUARD: Tensor Decomposition for Backdoor Attack Detection in Deep Neural Networks [3.489779105594534]
We introduce a novel approach to backdoor detection using two tensor decomposition methods applied to network activations.
This has a number of advantages relative to existing detection methods, including the ability to analyze multiple models at the same time.
Results show that our method detects backdoored networks more accurately and efficiently than current state-of-the-art methods.
arXiv Detail & Related papers (2024-01-06T03:08:28Z)
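A minimal, hypothetical sketch of the activation-tensor idea: stack activations from several models into a 3-way tensor and look for models whose factor loadings are outliers. The SVD of the mode-1 unfolding below is a simple stand-in for the two tensor decomposition methods the paper actually uses.

```python
import numpy as np

def model_mode_factors(activations, rank=2):
    """activations: (n_models, n_samples, n_neurons) tensor of activations on
    shared probe inputs. Returns one embedding per model from an SVD of the
    model-mode unfolding -- a simple stand-in for the CP/Tucker-style
    decompositions the paper applies."""
    m, s, n = activations.shape
    unfolded = activations.reshape(m, s * n)     # mode-1 unfolding
    U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
    return U[:, :rank]                           # per-model factor loadings

def flag_outlier_models(activations, z_thresh=2.0):
    """Flag models whose factor loadings sit far from the group median."""
    F = model_mode_factors(activations)
    dist = np.linalg.norm(F - np.median(F, axis=0), axis=1)
    z = (dist - dist.mean()) / (dist.std() + 1e-9)
    return z > z_thresh

# Toy data: 9 similar models plus one deviating ("backdoored") model.
rng = np.random.default_rng(2)
acts = rng.normal(size=(10, 50, 16))
acts[7] += 3.0                                   # hypothetical outlier model
print(flag_outlier_models(acts))
```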
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
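A hedged sketch of risk-sensitive decision-making built on the "reversion to a constant" observation: abstain whenever the predictive distribution lands near a constant reference. The uniform label marginal used here is a stand-in for whatever constant the model actually reverts to.

```python
import numpy as np

def risk_sensitive_decision(probs, marginal, tau=0.1):
    """Abstain when the predictive distribution is close (in L1) to a constant
    reference distribution; otherwise act on the argmax.
    Returns the predicted class, or -1 to abstain."""
    close_to_constant = np.abs(probs - marginal).sum(axis=-1) < tau
    decisions = probs.argmax(axis=-1)
    decisions[close_to_constant] = -1
    return decisions

marginal = np.full(10, 0.1)                   # hypothetical uniform label marginal
probs_id = np.eye(10)[[3, 7]]                 # confident, ID-like predictions
probs_ood = np.full((2, 10), 0.1)             # reverted to the constant
print(risk_sensitive_decision(np.vstack([probs_id, probs_ood]), marginal))
```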
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
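The energy score underlying energy-based detectors such as GNNSafe has a standard closed form, E(x) = -T log Σ_c exp(f_c(x)/T); the sketch below implements that score only, omitting the GNN-specific propagation the paper adds.

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Energy from classifier logits, computed with a numerically stable
    logsumexp: E(x) = -T * logsumexp(f(x) / T).
    Lower energy indicates a more in-distribution input."""
    z = logits / T
    m = z.max(axis=-1, keepdims=True)
    return -T * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

rng = np.random.default_rng(3)
id_logits = rng.normal(loc=5.0, size=(3, 7))    # confident, large logits
ood_logits = rng.normal(loc=0.0, size=(3, 7))   # flat, uncertain logits
print(energy_score(id_logits), energy_score(ood_logits))
```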
- Igeood: An Information Geometry Approach to Out-of-Distribution Detection [35.04325145919005]
We introduce Igeood, an effective method for detecting out-of-distribution (OOD) samples.
Igeood applies to any pre-trained neural network and works under various degrees of access to the machine learning model.
We show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets.
arXiv Detail & Related papers (2022-03-15T11:26:35Z)
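Igeood builds its scores on information-geometric (geodesic) distances. For categorical softmax outputs, the Fisher-Rao distance has the closed form d(p, q) = 2 arccos Σ_i √(p_i q_i); the sketch below implements that distance as an illustrative building block, not the paper's full method.

```python
import numpy as np

def fisher_rao_distance(p, q):
    """Fisher-Rao geodesic distance between categorical distributions:
    d(p, q) = 2 * arccos(sum_i sqrt(p_i * q_i))."""
    bc = np.sqrt(p * q).sum(axis=-1)             # Bhattacharyya coefficient
    return 2.0 * np.arccos(np.clip(bc, -1.0, 1.0))

uniform = np.full(5, 0.2)
peaked = np.array([0.92, 0.02, 0.02, 0.02, 0.02])
# A large distance to a reference distribution (the reference choice here is
# hypothetical) can be read as an OOD signal.
print(fisher_rao_distance(peaked, uniform))
```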
- Detecting OODs as datapoints with High Uncertainty [12.040347694782007]
Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution inputs (OODs).
This limitation is one of the key challenges in the adoption of DNNs in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis.
Several techniques have been developed to detect inputs where the model's prediction cannot be trusted.
We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach for detection of OODs as datapoints with high uncertainty (epistemic or aleatoric).
arXiv Detail & Related papers (2021-08-13T20:07:42Z)
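A standard way to split an ensemble's predictive uncertainty into epistemic and aleatoric parts is the mutual-information decomposition sketched below; it is offered as a hedged illustration, not necessarily the paper's exact estimator.

```python
import numpy as np

def entropy(p, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=-1)

def uncertainty_decomposition(ensemble_probs):
    """ensemble_probs: (n_members, n_inputs, n_classes).
    total = entropy of the mean prediction; aleatoric = mean member entropy;
    epistemic = total - aleatoric (the mutual information)."""
    total = entropy(ensemble_probs.mean(axis=0))
    aleatoric = entropy(ensemble_probs).mean(axis=0)
    return total - aleatoric, aleatoric          # (epistemic, aleatoric)

rng = np.random.default_rng(4)
probs = rng.dirichlet(np.ones(5), size=(8, 3))   # 8 members, 3 inputs, 5 classes
epistemic, aleatoric = uncertainty_decomposition(probs)
print(epistemic, aleatoric)
```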
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to protect the model against different types of unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
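A hedged sketch of one common coverage-style monitor, recording per-neuron activation ranges on training data and flagging inputs that leave them; the paper's specific coverage paradigms may differ.

```python
import numpy as np

class RangeCoverageMonitor:
    """Record per-neuron activation ranges on training data; at run time,
    flag inputs whose activations fall outside those ranges."""
    def fit(self, train_acts):
        self.lo = train_acts.min(axis=0)
        self.hi = train_acts.max(axis=0)
        return self

    def out_of_range_fraction(self, acts):
        outside = (acts < self.lo) | (acts > self.hi)
        return outside.mean(axis=-1)             # fraction of neurons out of range

rng = np.random.default_rng(5)
train_acts = rng.normal(size=(1000, 64))
monitor = RangeCoverageMonitor().fit(train_acts)
id_like = rng.normal(size=(2, 64))
ood_like = rng.normal(loc=6.0, size=(2, 64))     # hypothetical far-away inputs
print(monitor.out_of_range_fraction(id_like), monitor.out_of_range_fraction(ood_like))
```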
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
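One lightweight way to exploit knowledge of a symmetry group, short of the equivariant architectures the survey covers, is to average predictions over transformed inputs. The sketch below symmetrizes an arbitrary (hypothetical) model over 90-degree rotations:

```python
import numpy as np

def symmetrized_predict(model, x):
    """Make any model's prediction invariant to 90-degree rotations by
    averaging its outputs over the four rotations of the input."""
    rotations = [np.rot90(x, k=k, axes=(-2, -1)) for k in range(4)]
    return np.mean([model(r) for r in rotations], axis=0)

# Toy stand-in "model" that is *not* rotation-invariant on its own.
rng = np.random.default_rng(6)
W = rng.normal(size=(8, 10))
model = lambda img: img[0] @ W                   # looks only at the top row
x = rng.normal(size=(8, 8))
# The symmetrized prediction is identical for x and its rotation.
print(np.allclose(symmetrized_predict(model, x),
                  symmetrized_predict(model, np.rot90(x))))
```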
- A cognitive based Intrusion detection system [0.0]
Intrusion detection is one of the important mechanisms that provide computer network security.
This paper proposes a new approach based on a Deep Neural Network combined with a Support Vector Machine classifier.
The proposed model predicts attacks with better accuracy than similar intrusion detection methods.
arXiv Detail & Related papers (2020-05-19T13:30:30Z)
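A minimal sketch of the DNN-plus-SVM pattern this summary describes, on synthetic stand-in data (the paper's actual features, dataset, and architecture are not reproduced): train a small neural encoder, then fit an SVM on its learned representation.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

# Synthetic stand-in for network-traffic features (2 classes: normal / attack).
rng = np.random.default_rng(7)
X = rng.normal(size=(400, 20)).astype(np.float32)
y = (X[:, :5].sum(axis=1) > 0).astype(np.int64)  # hypothetical attack rule

encoder = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU())
head = nn.Linear(16, 2)
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-2)
xb, yb = torch.from_numpy(X), torch.from_numpy(y)
for _ in range(200):                              # briefly train the DNN
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(encoder(xb)), yb)
    loss.backward()
    opt.step()

# Use the DNN's learned representation as input to an SVM classifier.
with torch.no_grad():
    feats = encoder(xb).numpy()
svm = SVC(kernel="rbf").fit(feats[:300], y[:300])
print("held-out accuracy:", svm.score(feats[300:], y[300:]))
```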
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
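The paper optimizes an adversarial, information-bottleneck-based diversity objective; the sketch below is a simplified stand-in that adds a pairwise prediction-similarity penalty to the ensemble's cross-entropy loss.

```python
import torch
import torch.nn.functional as F

def diverse_ensemble_loss(logits_list, targets, beta=0.1):
    """Average cross-entropy over members, plus a penalty on pairwise
    prediction similarity -- a simplified stand-in for the paper's
    adversarial information-bottleneck diversity objective."""
    ce = sum(F.cross_entropy(l, targets) for l in logits_list) / len(logits_list)
    probs = [F.softmax(l, dim=-1) for l in logits_list]
    sim, n = 0.0, 0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            sim = sim + (probs[i] * probs[j]).sum(dim=-1).mean()
            n += 1
    return ce + beta * sim / n                    # penalize similar predictions

# Toy usage: two ensemble members on a 3-class batch.
torch.manual_seed(0)
logits_list = [torch.randn(5, 3, requires_grad=True) for _ in range(2)]
targets = torch.randint(0, 3, (5,))
loss = diverse_ensemble_loss(logits_list, targets)
loss.backward()
print(loss.item())
```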
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.