HGOE: Hybrid External and Internal Graph Outlier Exposure for Graph Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2407.21742v1
- Date: Wed, 31 Jul 2024 16:55:18 GMT
- Title: HGOE: Hybrid External and Internal Graph Outlier Exposure for Graph Out-of-Distribution Detection
- Authors: Junwei He, Qianqian Xu, Yangbangyan Jiang, Zitai Wang, Yuchen Sun, Qingming Huang
- Abstract summary: Graph data exhibits greater diversity but lower robustness to perturbations, complicating the integration of outliers.
We propose Hybrid External and Internal Graph Outlier Exposure (HGOE) to improve graph OOD detection performance.
- Score: 78.47008997035158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the progressive advancements in deep graph learning, out-of-distribution (OOD) detection for graph data has emerged as a critical challenge. While the efficacy of auxiliary datasets in enhancing OOD detection has been extensively studied for image and text data, such approaches have not yet been explored for graph data. Unlike Euclidean data, graph data exhibits greater diversity but lower robustness to perturbations, complicating the integration of outliers. To tackle these challenges, we propose the introduction of Hybrid External and Internal Graph Outlier Exposure (HGOE) to improve graph OOD detection performance. Our framework involves using realistic external graph data from various domains and synthesizing internal outliers within ID subgroups to address the poor robustness and presence of OOD samples within the ID class. Furthermore, we develop a boundary-aware OE loss that adaptively assigns weights to outliers, maximizing the use of high-quality OOD samples while minimizing the impact of low-quality ones. Our proposed HGOE framework is model-agnostic and designed to enhance the effectiveness of existing graph OOD detection models. Experimental results demonstrate that our HGOE framework can significantly improve the performance of existing OOD detection models across all 8 real datasets.
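The abstract does not spell out the boundary-aware OE loss, so the snippet below is only a minimal, hypothetical sketch of an adaptively weighted outlier-exposure term. It assumes a detector that assigns each graph an OOD score (higher = more OOD-like) and weights exposed outliers by how close their scores fall to an assumed decision boundary `tau`; the function name, the weighting scheme, and all hyperparameters are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def boundary_aware_oe_loss(id_scores, ood_scores, tau=0.0, beta=1.0, margin=1.0):
    """Hypothetical boundary-aware outlier-exposure term (not the paper's exact loss).

    id_scores:  OOD scores of in-distribution graphs (higher = more OOD-like)
    ood_scores: OOD scores of exposed outliers (external graphs or synthesized
                internal outliers from ID subgroups)
    tau:        assumed score threshold separating ID from OOD
    beta:       sharpness of the adaptive weights
    margin:     desired separation between ID scores and outlier scores
    """
    # Weight each outlier by how close its score lies to the boundary region
    # [tau, tau + margin]: outliers that look too ID-like, or that are already
    # trivially far from the boundary, contribute less to the loss.
    weights = (torch.sigmoid(beta * (ood_scores - tau))
               * torch.sigmoid(beta * (tau + margin - ood_scores))).detach()

    id_term = F.relu(id_scores - tau).mean()                         # push ID scores below tau
    ood_term = (weights * F.relu(tau + margin - ood_scores)).mean()  # push outliers above tau + margin
    return id_term + ood_term
```

In a full pipeline this term would be added to the base detector's objective, with `ood_scores` computed on both the external graphs and the internally synthesized outliers.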
Related papers
- Unifying Unsupervised Graph-Level Anomaly Detection and Out-of-Distribution Detection: A Benchmark [73.58840254552656]
Unsupervised graph-level anomaly detection (GLAD) and unsupervised graph-level out-of-distribution (OOD) detection have received significant attention in recent years.
We present a Unified Benchmark for unsupervised Graph-level OOD and anomaly Detection.
Our benchmark encompasses 35 datasets spanning four practical anomaly and OOD detection scenarios.
We conduct multi-dimensional analyses to explore the effectiveness, generalizability, robustness, and efficiency of existing methods.
arXiv Detail & Related papers (2024-06-21T04:07:43Z) - GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z) - Open-World Lifelong Graph Learning [7.535219325248997]
We study the problem of lifelong graph learning in an open-world scenario.
We utilize Out-of-Distribution (OOD) detection methods to recognize new classes.
We suggest performing new class detection by combining OOD detection methods with information aggregated from the graph neighborhood.
arXiv Detail & Related papers (2023-10-19T08:18:10Z) - Graph Structure and Feature Extrapolation for Out-of-Distribution Generalization [54.64375566326931]
Out-of-distribution (OOD) generalization deals with the prevalent learning scenario where test distribution shifts from training distribution.
We propose to achieve graph OOD generalization with the novel design of non-Euclidean-space linear extrapolation.
Our design tailors OOD samples for specific shifts without corrupting underlying causal mechanisms.
arXiv Detail & Related papers (2023-06-13T18:46:28Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area (a minimal sketch of the underlying energy score appears after this list).
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - GOOD-D: On Unsupervised Graph Out-Of-Distribution Detection [67.90365841083951]
We develop a new graph contrastive learning framework GOOD-D for detecting OOD graphs without using any ground-truth labels.
GOOD-D is able to capture the latent ID patterns and accurately detect OOD graphs based on the semantic inconsistency in different granularities.
As a pioneering work in unsupervised graph-level OOD detection, we build a comprehensive benchmark to compare our proposed approach with different state-of-the-art methods.
arXiv Detail & Related papers (2022-11-08T12:41:58Z) - Training OOD Detectors in their Natural Habitats [31.565635192716712]
Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.
Recent methods use auxiliary outlier data to regularize the model for improved OOD detection.
We propose a novel framework that leverages wild mixture data, which naturally consists of both ID and OOD samples.
arXiv Detail & Related papers (2022-02-07T15:38:39Z)
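As a reference point for the energy-based entry above (GNNSafe), the sketch below shows the generic energy score commonly used for OOD detection from classification logits, plus an assumed, simplified propagation step over the graph; it illustrates the general technique, not GNNSafe's actual implementation.

```python
import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Standard energy score: E(x) = -T * logsumexp(f(x) / T); lower energy
    # indicates a more in-distribution sample. logits: [num_nodes, num_classes].
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

def propagated_energy(logits: torch.Tensor, adj_norm: torch.Tensor,
                      num_steps: int = 2, alpha: float = 0.5) -> torch.Tensor:
    # Assumed simplification: smooth per-node energies over a normalized
    # adjacency matrix so that a node's score also reflects its neighborhood.
    e = energy_score(logits)
    for _ in range(num_steps):
        e = alpha * e + (1 - alpha) * (adj_norm @ e)
    return e
```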
This list is automatically generated from the titles and abstracts of the papers in this site.