WeShort: Out-of-distribution Detection With Weak Shortcut structure
- URL: http://arxiv.org/abs/2207.05055v2
- Date: Wed, 13 Jul 2022 01:02:00 GMT
- Title: WeShort: Out-of-distribution Detection With Weak Shortcut structure
- Authors: Jinhong Lin
- Abstract summary: We propose a simple and effective post-hoc technique, WeShort, to reduce the overconfidence of neural networks on OOD data.
Our method is compatible with different OOD detection scores and can generalize well to different architectures of networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks have achieved impressive performance on data drawn
from the same distribution as the training set, but they can produce
overconfident, incorrect results on data they have never seen. Therefore, it is
essential to detect whether inputs are out-of-distribution (OOD) in order to
guarantee the safety of neural networks deployed in the real world. In this
paper, we propose a simple and effective
post-hoc technique, WeShort, to reduce the overconfidence of neural networks on
OOD data. Our method is inspired by the observation of the internal residual
structure, which shows the separation of the OOD and in-distribution (ID) data
in the shortcut layer. Our method is compatible with different OOD detection
scores and can generalize well to different architectures of networks. We
demonstrate our method on various OOD datasets to show its competitive
performances and provide reasonable hypotheses to explain why our method works.
On the ImageNet benchmark, WeShort achieves state-of-the-art performance on the
false positive rate at 95% true positive rate (FPR95) and the area under the
receiver operating characteristic curve (AUROC) among the family of post-hoc methods.
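The abstract describes WeShort only at a high level, so the following is a minimal sketch of how a shortcut-branch post-hoc score might be wired up, not the paper's actual implementation: a forward hook reads the activations feeding a residual block's shortcut in a torchvision ResNet-50, and a ReAct-style quantile-clipping penalty adjusts the energy score. The hook location, the clipping rule, and the 0.01 weight are all illustrative assumptions.

```python
# Hedged sketch: a post-hoc OOD score read off a residual block's shortcut
# branch. The rectification used here (quantile clipping, in the spirit of
# ReAct) is an assumption for illustration, not WeShort's exact operation.
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
shortcut_acts = {}

def hook(module, inputs, output):
    # For torchvision ResNet blocks, inputs[0] is the tensor that feeds the
    # identity/downsample shortcut as well as the convolutional path.
    shortcut_acts["x"] = inputs[0].detach()

# Instrument the last block of the final stage (an assumption; the paper may
# observe a different shortcut layer).
model.layer4[-1].register_forward_hook(hook)

@torch.no_grad()
def weshort_style_score(x, clip_quantile=0.9):
    logits = model(x)
    s = shortcut_acts["x"].flatten(1)
    # Penalize shortcut activations that overshoot a per-sample quantile.
    thresh = torch.quantile(s, clip_quantile, dim=1)
    penalty = (s - thresh[:, None]).clamp(min=0).sum(dim=1)
    # Energy score from the logits, adjusted by the shortcut overshoot;
    # higher means more ID-like under this sketch's convention.
    return torch.logsumexp(logits, dim=1) - 0.01 * penalty

scores = weshort_style_score(torch.randn(4, 3, 224, 224))
```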
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions are proposed to distinguish them from in-distribution (ID) data.
We introduce a novel perspective, i.e., employing different common corruptions on the input space.
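A minimal sketch of the input-expansion idea as this summary states it: score several commonly corrupted copies of each input and aggregate. The particular corruptions, the energy score, and the mean aggregation are assumptions for illustration, not the paper's exact recipe; inputs are assumed to be normalized tensors.

```python
# Hedged sketch: expand each input with common corruptions, score every copy,
# and aggregate the scores.
import torch

def corruptions(x):
    yield x                                # original input
    yield x + 0.05 * torch.randn_like(x)   # additive Gaussian noise
    yield torch.flip(x, dims=[-1])         # horizontal flip
    yield torch.clamp(x * 1.2, -3, 3)      # mild contrast change

@torch.no_grad()
def expanded_energy_score(model, x):
    scores = [torch.logsumexp(model(xc), dim=1) for xc in corruptions(x)]
    return torch.stack(scores).mean(dim=0)  # higher = more ID-like
```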
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- WeiPer: OOD Detection using Weight Perturbations of Class Projections [11.130659240045544]
We introduce perturbations of the class projections in the final fully connected layer which creates a richer representation of the input.
We achieve state-of-the-art OOD detection results across multiple benchmarks of the OpenOOD framework.
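A hedged sketch of scoring with perturbed class projections: draw several random perturbations of the final fully connected layer's weights and aggregate the energies of the resulting logits. The Gaussian perturbation, its scale sigma, and the mean aggregation are illustrative assumptions rather than WeiPer's exact construction.

```python
# Hedged sketch: an ensemble of randomly perturbed class projections yields a
# richer representation of the penultimate features for OOD scoring.
import torch
import torch.nn.functional as F

@torch.no_grad()
def weiper_style_score(features, fc_weight, fc_bias, k=8, sigma=0.05):
    # features: (B, D) penultimate activations; fc_weight: (C, D); fc_bias: (C,)
    scores = []
    for _ in range(k):
        w = fc_weight + sigma * torch.randn_like(fc_weight)
        logits = F.linear(features, w, fc_bias)
        scores.append(torch.logsumexp(logits, dim=1))
    return torch.stack(scores).mean(dim=0)  # higher = more ID-like
```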
arXiv Detail & Related papers (2024-05-27T13:38:28Z)
- Gradient-Regularized Out-of-Distribution Detection [28.542499196417214]
One of the challenges for neural networks in real-life applications is the overconfident errors these models make when the data is not from the original training distribution.
We propose the idea of leveraging the information embedded in the gradient of the loss function during training to enable the network to learn a desired OOD score for each sample.
We also develop a novel energy-based sampling method to allow the network to be exposed to more informative OOD samples during the training phase.
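A sketch in the spirit of this training-time idea, with the energy-margin hinge loss from the energy-based OOD literature standing in for the paper's exact objective; the margins m_id and m_ood and the weight lam are illustrative assumptions, and the energy-based sampling of informative OOD examples is omitted.

```python
# Hedged sketch: alongside cross-entropy on ID data, push an energy-based OOD
# score toward separate target levels for ID and auxiliary OOD samples, so the
# gradient of the loss carries an explicit OOD-score target per sample.
import torch
import torch.nn.functional as F

def regularized_loss(model, x_id, y_id, x_ood, m_id=-25.0, m_ood=-7.0, lam=0.1):
    logits_id = model(x_id)
    ce = F.cross_entropy(logits_id, y_id)
    e_id = -torch.logsumexp(logits_id, dim=1)
    e_ood = -torch.logsumexp(model(x_ood), dim=1)
    # Hinge penalties: ID energy is pushed below m_id, OOD energy above m_ood.
    reg = (e_id - m_id).clamp(min=0).pow(2).mean() \
        + (m_ood - e_ood).clamp(min=0).pow(2).mean()
    return ce + lam * reg
```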
arXiv Detail & Related papers (2024-04-18T17:50:23Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions are unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We instead observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
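One way the stated insight could be operationalized for risk-sensitive decisions, as a hedged sketch: abstain when the prediction has reverted close to a constant output. Using the uniform distribution as the stand-in for that constant solution, and a fixed KL threshold tau, are assumptions for illustration.

```python
# Hedged sketch: treat closeness to a constant (here uniform) prediction as an
# OOD signal and abstain on such inputs.
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_or_abstain(model, x, tau=0.5):
    probs = F.softmax(model(x), dim=1)
    uniform = torch.full_like(probs, 1.0 / probs.shape[1])
    # KL(probs || uniform) shrinks as the prediction reverts to the constant.
    kl = (probs * (probs.clamp_min(1e-12) / uniform).log()).sum(dim=1)
    preds = probs.argmax(dim=1)
    return torch.where(kl > tau, preds, torch.full_like(preds, -1))  # -1 = abstain
```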
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
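A minimal sketch of energy-based node scoring with propagation over the graph, in the spirit of what this entry describes; the row normalization, the step count, and the mixing weight alpha are illustrative assumptions, and a dense adjacency is used only to keep the example short.

```python
# Hedged sketch: per-node energy from GNN logits, smoothed along edges so that
# a node's OOD score is informed by its neighborhood.
import torch

@torch.no_grad()
def node_ood_scores(logits, adj, steps=2, alpha=0.5):
    # logits: (N, C) node logits from any GNN; adj: (N, N) dense adjacency.
    energy = -torch.logsumexp(logits, dim=1)   # lower = more ID-like
    deg = adj.sum(dim=1).clamp(min=1.0)
    p = adj / deg[:, None]                     # row-normalized adjacency
    for _ in range(steps):
        energy = alpha * energy + (1 - alpha) * (p @ energy)
    return energy
```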
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
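To make the stated equivalence concrete (notation ours, not the paper's, assuming equal priors on the in- and out-distribution), the Bayes-optimal discriminator is a monotone transform of the density ratio:

```latex
% Bayes-optimal score for binary in- vs. out-distribution discrimination
% (equal priors assumed); \sigma denotes the logistic sigmoid.
s^{*}(x) = \frac{p_{\mathrm{in}}(x)}{p_{\mathrm{in}}(x) + p_{\mathrm{out}}(x)}
         = \sigma\!\left(\log \frac{p_{\mathrm{in}}(x)}{p_{\mathrm{out}}(x)}\right)
```

Hence any scoring function that is a strictly increasing transform of the ratio p_in(x)/p_out(x) induces the same OOD ranking, which is the sense in which distinct formulations of the problem coincide.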
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
- Igeood: An Information Geometry Approach to Out-of-Distribution Detection [35.04325145919005]
We introduce Igeood, an effective method for detecting out-of-distribution (OOD) samples.
Igeood applies to any pre-trained neural network and works under various degrees of access to the machine learning model.
We show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets.
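A hedged sketch of information-geometric scoring: the closed-form Fisher-Rao distance between categorical distributions, applied between an input's softmax output and per-class reference distributions. The choice of references (class-mean softmax vectors, to be estimated on ID training data) and the temperature are illustrative assumptions, not necessarily Igeood's exact construction.

```python
# Hedged sketch: score inputs by their minimum Fisher-Rao distance to
# per-class reference softmax distributions.
import torch
import torch.nn.functional as F

def fisher_rao(p, q, eps=1e-12):
    # Closed-form Fisher-Rao distance between categorical distributions.
    bc = (p.clamp_min(eps).sqrt() * q.clamp_min(eps).sqrt()).sum(dim=-1)
    return 2.0 * torch.acos(bc.clamp(-1.0, 1.0))

@torch.no_grad()
def igeood_style_score(model, x, class_refs, temperature=1.0):
    # class_refs: (C, C) tensor, row c = mean softmax output of training class c.
    probs = F.softmax(model(x) / temperature, dim=1)            # (B, C)
    d = fisher_rao(probs[:, None, :], class_refs[None, :, :])   # (B, C)
    return -d.min(dim=1).values  # higher = more ID-like
```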
arXiv Detail & Related papers (2022-03-15T11:26:35Z)
- ReAct: Out-of-distribution Detection With Rectified Activations [20.792140933660075]
Out-of-distribution (OOD) detection has received much attention lately due to its practical importance.
One of the primary challenges is that models often produce highly confident predictions on OOD data.
We propose ReAct--a simple and effective technique for reducing model overconfidence on OOD data.
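ReAct is simple enough to sketch directly from its description: clamp the penultimate activations at an upper threshold c and compute the energy score from the resulting logits. The default c below is a placeholder; the paper sets it from a percentile of ID activations.

```python
# Sketch of ReAct-style rectification: truncate penultimate activations at an
# upper threshold, then score with the energy of the rectified logits.
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
backbone = torch.nn.Sequential(*list(model.children())[:-1])  # up to avgpool

@torch.no_grad()
def react_energy(x, c=1.0):
    feats = backbone(x).flatten(1)         # penultimate activations
    feats = feats.clamp(max=c)             # ReAct: rectify at threshold c
    logits = model.fc(feats)               # reuse the original classifier head
    return torch.logsumexp(logits, dim=1)  # higher = more ID-like
```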
arXiv Detail & Related papers (2021-11-24T21:02:07Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show that it obtains top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
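A hedged sketch of the observer setup as this entry describes it: a small convolutional head (e.g. Conv2d(num_classes, 1, 3, padding=1)) reads the frozen segmenter's logits and predicts per-pixel failure, with training labels generated by a local adversarial attack. The single FGSM step on a random square patch, and the patch size and epsilon, are illustrative assumptions.

```python
# Hedged sketch: train an observer network on failures triggered by a local
# adversarial perturbation of the segmenter's input.
import torch
import torch.nn.functional as F

def local_adversarial_attack(segmenter, x, y, eps=0.05, patch=64):
    # One FGSM step restricted to a random square patch (illustrative LAA).
    x = x.clone().requires_grad_(True)
    F.cross_entropy(segmenter(x), y).backward()
    segmenter.zero_grad(set_to_none=True)  # keep parameter grads clean
    mask = torch.zeros_like(x)
    i = torch.randint(0, x.shape[-2] - patch, (1,)).item()
    j = torch.randint(0, x.shape[-1] - patch, (1,)).item()
    mask[..., i:i + patch, j:j + patch] = 1.0
    return (x + eps * mask * x.grad.sign()).detach()

def observer_step(segmenter, observer, x, y, opt):
    x_adv = local_adversarial_attack(segmenter, x, y)
    with torch.no_grad():
        logits = segmenter(x_adv)                 # (B, C, H, W)
        errors = (logits.argmax(1) != y).float()  # per-pixel failure labels
    pred = observer(logits).squeeze(1)            # per-pixel failure logit
    loss = F.binary_cross_entropy_with_logits(pred, errors)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```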
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
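A hedged sketch of ALOE-style robust training: cross-entropy on adversarially perturbed inliers plus a pull-to-uniform term on adversarially perturbed auxiliary outliers. A single FGSM step stands in for the inner maximization (the paper may use a stronger multi-step attack), and eps and lam are illustrative assumptions.

```python
# Hedged sketch: expose the model to adversarially crafted inliers and
# outliers during training.
import torch
import torch.nn.functional as F

def fgsm(model, x, loss_fn, eps=8 / 255):
    # One ascent step on the given loss w.r.t. the input.
    x = x.clone().requires_grad_(True)
    loss_fn(model(x)).backward()
    model.zero_grad(set_to_none=True)  # keep parameter grads clean
    return (x + eps * x.grad.sign()).detach()

def aloe_style_loss(model, x_in, y_in, x_out, lam=0.5):
    # Inliers: worst-case inputs for classification accuracy.
    x_in_adv = fgsm(model, x_in, lambda z: F.cross_entropy(z, y_in))
    # Outliers: worst-case inputs for the pull-to-uniform objective.
    x_out_adv = fgsm(model, x_out, lambda z: -F.log_softmax(z, dim=1).mean())
    ce = F.cross_entropy(model(x_in_adv), y_in)
    unif = -F.log_softmax(model(x_out_adv), dim=1).mean()  # CE against uniform
    return ce + lam * unif
```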
arXiv Detail & Related papers (2020-03-21T17:46:28Z)