GAIA: Delving into Gradient-based Attribution Abnormality for
Out-of-distribution Detection
- URL: http://arxiv.org/abs/2311.09620v2
- Date: Tue, 16 Jan 2024 12:26:08 GMT
- Authors: Jinggang Chen, Junjie Li, Xiaoyang Qu, Jianzong Wang, Jiguang Wan,
Jing Xiao
- Abstract summary: We offer an innovative perspective on quantifying the disparities between in-distribution (ID) and out-of-distribution (OOD) data.
We introduce two forms of abnormalities for OOD detection: the zero-deflation abnormality and the channel-wise average abnormality.
The effectiveness of GAIA is validated on both commonly utilized (CIFAR) and large-scale (ImageNet-1k) benchmarks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting out-of-distribution (OOD) examples is crucial to guarantee the
reliability and safety of deep neural networks in real-world settings. In this
paper, we offer an innovative perspective on quantifying the disparities
between in-distribution (ID) and OOD data -- analyzing the uncertainty that
arises when models attempt to explain their predictive decisions. This
perspective is motivated by our observation that gradient-based attribution
methods encounter challenges in assigning feature importance to OOD data,
thereby yielding divergent explanation patterns. Consequently, we investigate
how attribution gradients lead to uncertain explanation outcomes and introduce
two forms of abnormalities for OOD detection: the zero-deflation abnormality
and the channel-wise average abnormality. We then propose GAIA, a simple and
effective approach that incorporates Gradient Abnormality Inspection and
Aggregation. The effectiveness of GAIA is validated on both commonly utilized
(CIFAR) and large-scale (ImageNet-1k) benchmarks. Specifically, GAIA reduces
the average FPR95 by 23.10% on CIFAR10 and by 45.41% on CIFAR100 compared to
advanced post-hoc methods.
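Below is a minimal PyTorch sketch of how the two abnormalities described in the abstract could be scored. The function name `gaia_style_score`, the choice of feature layer, and the equal-weight aggregation are all illustrative assumptions, not the paper's exact formulation:

```python
# Illustrative sketch (not the authors' code): scoring the two
# gradient-attribution abnormalities for a batch of inputs.
import torch
import torch.nn as nn

def gaia_style_score(model: nn.Module, x: torch.Tensor,
                     feature_layer: nn.Module) -> torch.Tensor:
    """Return a per-sample abnormality score; higher suggests OOD."""
    x = x.requires_grad_(True)  # ensure a graph exists even if params are frozen
    feats = {}
    handle = feature_layer.register_forward_hook(
        lambda mod, inp, out: feats.update({"map": out}))
    logits = model(x)
    handle.remove()

    # Attribution gradient: d(max logit)/d(feature map), a common
    # target when explaining the predicted class.
    target = logits.max(dim=1).values.sum()
    g = torch.autograd.grad(target, feats["map"])[0]  # (B, C, H, W)

    # (1) Zero-deflation abnormality: proportion of exactly-zero
    # entries in the attribution gradient.
    zero_rate = (g == 0).float().mean(dim=(1, 2, 3))

    # (2) Channel-wise average abnormality: dispersion of the
    # channel-averaged gradients across channels.
    chan_var = g.mean(dim=(2, 3)).var(dim=1)

    # Equal-weight aggregation is an assumption made for illustration.
    return zero_rate + chan_var
```

As a usage example, with a torchvision ResNet-50 one might pass `feature_layer=model.layer3`; thresholding the resulting scores then separates ID from OOD inputs.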
Related papers
- Typicalness-Aware Learning for Failure Detection [26.23185979968123]
Deep neural networks (DNNs) often suffer from the overconfidence issue, where incorrect predictions are made with high confidence scores.
We propose a novel approach called Typicalness-Aware Learning (TAL) to address this issue and improve failure detection performance.
arXiv Detail & Related papers (2024-11-04T11:09:47Z)
- DSDE: Using Proportion Estimation to Improve Model Selection for Out-of-Distribution Detection [15.238164468992148]
We name the proposed approach the DOS-Storey-based Detector Ensemble (DSDE).
Experimental results on CIFAR10 and CIFAR100 demonstrate the effectiveness of our approach in tackling OoD detection challenges.
arXiv Detail & Related papers (2024-11-03T09:01:36Z)
- LINe: Out-of-Distribution Detection by Leveraging Important Neurons [15.797257361788812]
We introduce a new aspect for analyzing the difference in model outputs between in-distribution data and OOD data.
We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc out-of-distribution detection.
arXiv Detail & Related papers (2023-03-24T13:49:05Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Free Lunch for Generating Effective Outlier Supervision [46.37464572099351]
We propose an ultra-effective method to generate near-realistic outlier supervision.
Our proposed BayesAug significantly reduces the false positive rate by over 12.50% compared with previous schemes.
arXiv Detail & Related papers (2023-01-17T01:46:45Z)
- Exploring Covariate and Concept Shift for Detection and Calibration of Out-of-Distribution Data [77.27338842609153]
Our characterization reveals that sensitivity to each type of shift is important for the detection and confidence calibration of OOD data.
We propose a geometrically-inspired method to improve OOD detection under both shifts with only in-distribution data.
We are the first to propose a method that works well across both OOD detection and calibration and under different types of shifts.
arXiv Detail & Related papers (2021-10-28T15:42:55Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
- Towards Out-of-Distribution Detection with Divergence Guarantee in Deep Generative Models [22.697643259435115]
Deep generative models may assign higher likelihood to out-of-distribution (OOD) data than in-distribution (ID) data.
We prove theorems to investigate the divergences in flow-based models.
We propose two group anomaly detection methods.
arXiv Detail & Related papers (2020-02-09T09:54:12Z)
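The comparisons above report FPR95 and AUROC. As a reference, here is a minimal sketch of how these two OOD metrics are commonly computed from per-sample detection scores using scikit-learn; it assumes the convention that higher score means more likely OOD (conventions vary across papers):

```python
# Hedged sketch: AUROC and FPR95 from per-sample OOD scores,
# treating OOD as the positive class.
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_and_fpr95(id_scores, ood_scores):
    id_scores = np.asarray(id_scores)
    ood_scores = np.asarray(ood_scores)
    labels = np.concatenate([np.zeros(len(id_scores)),   # 0 = ID
                             np.ones(len(ood_scores))])  # 1 = OOD
    scores = np.concatenate([id_scores, ood_scores])
    auroc = roc_auc_score(labels, scores)

    # FPR95: fraction of ID samples falsely flagged as OOD at the
    # threshold where 95% of OOD samples are detected (TPR = 95%).
    thresh = np.percentile(ood_scores, 5)
    fpr95 = np.mean(id_scores >= thresh)
    return auroc, fpr95
```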