Redundancy-Aware Test-Time Graph Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2510.14562v1
- Date: Thu, 16 Oct 2025 11:14:45 GMT
- Title: Redundancy-Aware Test-Time Graph Out-of-Distribution Detection
- Authors: Yue Hou, He Zhu, Ruomei Liu, Yingke Su, Junran Wu, Ke Xu
- Abstract summary: RedOUT is an unsupervised framework that integrates structural entropy into test-time OOD detection for graph classification. Our method achieves an average improvement of 6.7%, significantly surpassing the best competitor by 17.3% on the ClinTox/LIPO dataset pair.
- Score: 20.560483914725435
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Distributional discrepancy between training and test data can lead models to make inaccurate predictions when encountering out-of-distribution (OOD) samples in real-world applications. Although existing graph OOD detection methods leverage data-centric techniques to extract effective representations, their performance remains compromised by structural redundancy that induces semantic shifts. To address this dilemma, we propose RedOUT, an unsupervised framework that integrates structural entropy into test-time OOD detection for graph classification. Concretely, we introduce the Redundancy-aware Graph Information Bottleneck (ReGIB) and decompose the objective into essential information and irrelevant redundancy. By minimizing structural entropy, the decoupled redundancy is reduced, and theoretically grounded upper and lower bounds are proposed for optimization. Extensive experiments on real-world datasets demonstrate the superior performance of RedOUT on OOD detection. Specifically, our method achieves an average improvement of 6.7%, significantly surpassing the best competitor by 17.3% on the ClinTox/LIPO dataset pair.
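The abstract's central quantity, structural entropy, measures the uncertainty in a graph's structure; minimizing it is how RedOUT reduces decoupled redundancy. As a minimal illustration (not the paper's full ReGIB objective), the one-dimensional structural entropy of a graph can be computed directly from its degree sequence:

```python
from math import log2

def structural_entropy_1d(degrees):
    """One-dimensional structural entropy of a graph, computed from its
    degree sequence: H(G) = -sum_v (d_v / 2m) * log2(d_v / 2m),
    where 2m is the total degree (twice the edge count)."""
    total = sum(degrees)
    return -sum((d / total) * log2(d / total) for d in degrees if d > 0)

# Example: a 4-cycle (every node has degree 2) spreads the stationary
# random-walk distribution uniformly over 4 nodes, so its one-dimensional
# structural entropy is log2(4) = 2.
print(structural_entropy_1d([2, 2, 2, 2]))  # → 2.0
```

RedOUT operates on higher-order structural entropy defined over encoding trees; the one-dimensional case above is only the degree-level special case.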
Related papers
- Learning to Explore: Policy-Guided Outlier Synthesis for Graph Out-of-Distribution Detection [51.93878677594561]
In unsupervised graph-level OOD detection, models are typically trained using only in-distribution (ID) data. We propose a Policy-Guided Outlier Synthesis framework that replaces static synthesis with a learned exploration strategy.
arXiv Detail & Related papers (2026-02-28T11:40:18Z) - Graph Out-of-Distribution Detection via Test-Time Calibration with Dual Dynamic Dictionaries [15.43092254877282]
A key challenge in graph out-of-distribution (OOD) detection lies in the absence of ground-truth OOD samples during training. We propose a novel test-time graph OOD detection method, termed BaCa, that calibrates OOD scores using dual dynamically updated dictionaries. BaCa significantly outperforms existing state-of-the-art methods in OOD detection.
arXiv Detail & Related papers (2025-11-17T16:14:48Z) - Structural Entropy Guided Unsupervised Graph Out-Of-Distribution Detection [11.217628543343855]
Unsupervised out-of-distribution (OOD) detection is vital for ensuring the reliability of graph neural networks (GNNs). Existing methods often suffer from compromised performance due to redundant information in graph structures. We propose SEGO, an unsupervised framework that integrates structural entropy into OOD detection.
arXiv Detail & Related papers (2025-03-05T07:47:57Z) - HGOE: Hybrid External and Internal Graph Outlier Exposure for Graph Out-of-Distribution Detection [78.47008997035158]
Graph data exhibits greater diversity but lower robustness to perturbations, complicating the integration of outliers.
We propose Hybrid External and Internal Graph Outlier Exposure (HGOE) to improve graph OOD detection performance.
arXiv Detail & Related papers (2024-07-31T16:55:18Z) - Graph Out-of-Distribution Generalization with Controllable Data
Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
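GNNSafe builds on the energy score of Liu et al. (2020): lower energy indicates a more ID-like sample, so thresholding the (negated) energy separates ID from OOD inputs. A minimal sketch of that score, computed from a classifier's logits with a numerically stable log-sum-exp (the graph-specific energy propagation of GNNSafe is omitted):

```python
import math

def energy_score(logits, temperature=1.0):
    """Energy-based OOD score: E(x) = -T * log(sum_c exp(logit_c / T)).
    Confident (peaked) predictions yield lower energy than flat ones."""
    t = temperature
    m = max(l / t for l in logits)  # shift by the max to stabilize the exp
    return -t * (m + math.log(sum(math.exp(l / t - m) for l in logits)))

# A peaked logit vector (confident, ID-like) has lower energy than a
# flat one (uncertain, OOD-like).
print(energy_score([10.0, 0.0, 0.0]) < energy_score([1.0, 1.0, 1.0]))  # → True
```

In practice the energies of a held-out ID set fix the detection threshold, and test samples with higher energy are flagged as OOD.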
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - GOOD-D: On Unsupervised Graph Out-Of-Distribution Detection [67.90365841083951]
We develop a new graph contrastive learning framework GOOD-D for detecting OOD graphs without using any ground-truth labels.
GOOD-D is able to capture the latent ID patterns and accurately detect OOD graphs based on semantic inconsistency at different granularities.
As a pioneering work in unsupervised graph-level OOD detection, we build a comprehensive benchmark to compare our proposed approach with different state-of-the-art methods.
arXiv Detail & Related papers (2022-11-08T12:41:58Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.