Advancing Out-of-Distribution Detection through Data Purification and
Dynamic Activation Function Design
- URL: http://arxiv.org/abs/2403.03412v1
- Date: Wed, 6 Mar 2024 02:39:22 GMT
- Title: Advancing Out-of-Distribution Detection through Data Purification and
Dynamic Activation Function Design
- Authors: Yingrui Ji, Yao Zhu, Zhigang Li, Jiansheng Chen, Yunlong Kong and
Jingbo Chen
- Abstract summary: We introduce OOD-R (Out-of-Distribution-Rectified), a meticulously curated collection of open-source datasets with enhanced noise reduction properties.
OOD-R incorporates noise filtering technologies to refine the datasets, ensuring a more accurate and reliable evaluation of OOD detection algorithms.
We present ActFun, an innovative method that fine-tunes the model's response to diverse inputs, thereby improving the stability of feature extraction.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the dynamic realms of machine learning and deep learning, the robustness
and reliability of models are paramount, especially in critical real-world
applications. A fundamental challenge in this sphere is managing
Out-of-Distribution (OOD) samples, which significantly increase the risk of
model misclassification and uncertainty. Our work addresses this challenge by
enhancing the detection and management of OOD samples in neural networks. We
introduce OOD-R (Out-of-Distribution-Rectified), a meticulously curated
collection of open-source datasets with enhanced noise reduction properties.
In-Distribution (ID) noise in existing OOD datasets can lead to inaccurate
evaluation of detection algorithms. Recognizing this, OOD-R incorporates noise
filtering technologies to refine the datasets, ensuring a more accurate and
reliable evaluation of OOD detection algorithms. This approach not only
improves the overall quality of data but also aids in better distinguishing
between OOD and ID samples, resulting in up to a 2.5% improvement in model
accuracy and at least a 3.2% reduction in false positives. Furthermore, we
present ActFun, an innovative method that fine-tunes the model's response to
diverse inputs, thereby improving the stability of feature extraction and
minimizing specificity issues. ActFun addresses the common problem of model
overconfidence in OOD detection by strategically reducing the influence of
hidden units, which enhances the model's capability to estimate OOD uncertainty
more accurately. Implementing ActFun in the OOD-R dataset has led to
significant performance enhancements, including an 18.42% increase in the AUROC
of the GradNorm method and a 16.93% decrease in the FPR95 of the Energy method.
Overall, our research not only advances the methodologies in OOD detection but
also emphasizes the importance of dataset integrity for accurate algorithm
evaluation.
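The abstract describes ActFun only at a high level: it "strategically reduces the influence of hidden units" before the OOD score is computed, but gives no formula. As a rough illustration of that idea, the sketch below combines ReAct-style clipping of penultimate activations with the energy score of the Energy method mentioned above. This is a sketch under stated assumptions, not the paper's actual ActFun: the `features`/`classifier` split and the 90th-percentile threshold are our illustrative choices.
```python
import torch

def fit_activation_threshold(model, id_loader, percentile=90.0):
    """Choose a clipping threshold from in-distribution activations.

    Hypothetical helper: assumes the network exposes a `features` module
    that produces penultimate activations. The 90th-percentile heuristic
    follows ReAct, not a value reported by this paper.
    """
    acts = []
    with torch.no_grad():
        for x, _ in id_loader:
            acts.append(model.features(x).flatten())
    return torch.quantile(torch.cat(acts), percentile / 100.0).item()

def energy_ood_score(model, x, threshold):
    """Energy-based OOD score with hidden-unit attenuation.

    Clipping penultimate activations is one concrete way to "reduce the
    influence of hidden units"; lower energy indicates in-distribution.
    """
    with torch.no_grad():
        h = model.features(x)               # penultimate activations
        h = torch.clamp(h, max=threshold)   # suppress extreme units
        logits = model.classifier(h)        # assumed final linear head
        return -torch.logsumexp(logits, dim=1)
```
A sample would then be flagged as OOD when its energy exceeds a cutoff calibrated on held-out ID data, and AUROC/FPR95 are computed over the resulting scores.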
Related papers
- LINe: Out-of-Distribution Detection by Leveraging Important Neurons
We introduce a new aspect for analyzing the difference in model outputs between in-distribution data and OOD data.
We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc out-of-distribution detection.
arXiv Detail & Related papers (2023-03-24T13:49:05Z)
- Out-of-distribution Detection with Implicit Outlier Transformation
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well for unseen OOD situations.
arXiv Detail & Related papers (2023-03-09T04:36:38Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Diffusion Denoising Process for Perceptron Bias in Out-of-distribution Detection
We introduce a new perceptron bias assumption that suggests discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem.
We demonstrate that the diffusion denoising process (DDP) of DMs serves as a novel form of asymmetric interpolation, which is well-suited to enhance the input and mitigate the overconfidence problem.
Our experiments on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA approaches.
arXiv Detail & Related papers (2022-11-21T08:45:08Z)
- Models Out of Line: A Fourier Lens on Distribution Shift Robustness
Improving the accuracy of deep neural networks (DNNs) on out-of-distribution (OOD) data is critical to the acceptance of deep learning (DL) in real-world applications.
Recently, some promising approaches have been developed to improve OOD robustness.
There is still no clear understanding of the conditions on OOD data and model properties required to observe effective robustness.
arXiv Detail & Related papers (2022-07-08T18:05:58Z)
- ReAct: Out-of-distribution Detection With Rectified Activations
Out-of-distribution (OOD) detection has received much attention lately due to its practical importance.
One of the primary challenges is that models often produce highly confident predictions on OOD data.
We propose ReAct, a simple and effective technique for reducing model overconfidence on OOD data.
arXiv Detail & Related papers (2021-11-24T21:02:07Z)
- On the Impact of Spurious Correlation for Out-of-distribution Detection
We present a new formalization and model the data shifts by taking into account both the invariant and environmental features.
Our results suggest that the detection performance is severely worsened when the correlation between spurious features and labels is increased in the training set.
arXiv Detail & Related papers (2021-09-12T23:58:17Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper we propose a novel method where from first principles we combine a certifiable OOD detector with a standard classifier into an OOD aware classifier.
In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining
Adversarial Training with informative Outlier Mining (ATOM) improves the robustness of OOD detection.
ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks.
arXiv Detail & Related papers (2020-06-26T20:58:05Z)
- Robust Out-of-distribution Detection for Neural Networks
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
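For reference, the AUROC and FPR95 figures quoted in the abstract and in several entries above follow the standard OOD evaluation protocol. Below is a minimal sketch of both metrics, assuming the convention that higher scores mean "more in-distribution" and using scikit-learn only for the ROC computation; the function name is ours, not from any of the cited papers.
```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_and_fpr95(id_scores, ood_scores):
    """AUROC and FPR@95%TPR for an OOD detector.

    Convention assumed here: higher score => more likely in-distribution,
    with ID samples treated as the positive class.
    """
    scores = np.concatenate([id_scores, ood_scores])
    labels = np.concatenate([np.ones(len(id_scores)), np.zeros(len(ood_scores))])
    auroc = roc_auc_score(labels, scores)
    # FPR95: fraction of OOD samples that still score above the cutoff
    # which retains 95% of ID samples (i.e. a 95% true positive rate).
    cutoff = np.percentile(id_scores, 5)  # 95% of ID scores lie above it
    fpr95 = float(np.mean(ood_scores >= cutoff))
    return auroc, fpr95
```
A lower FPR95, such as the decrease reported for the Energy method on OOD-R, means fewer OOD samples slip through at the operating point where 95% of ID samples are accepted.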