GOOD-D: On Unsupervised Graph Out-Of-Distribution Detection
- URL: http://arxiv.org/abs/2211.04208v1
- Date: Tue, 8 Nov 2022 12:41:58 GMT
- Title: GOOD-D: On Unsupervised Graph Out-Of-Distribution Detection
- Authors: Yixin Liu, Kaize Ding, Huan Liu, Shirui Pan
- Abstract summary: We develop a new graph contrastive learning framework GOOD-D for detecting OOD graphs without using any ground-truth labels.
GOOD-D is able to capture the latent ID patterns and accurately detect OOD graphs based on the semantic inconsistency at different granularities.
As a pioneering work in unsupervised graph-level OOD detection, we build a comprehensive benchmark to compare our proposed approach with different state-of-the-art methods.
- Score: 67.90365841083951
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most existing deep learning models are trained based on the closed-world
assumption, where the test data is assumed to be drawn i.i.d. from the same
distribution as the training data, known as in-distribution (ID). However, when
models are deployed in an open-world scenario, test samples can be
out-of-distribution (OOD) and therefore should be handled with caution. To
detect such OOD samples drawn from an unknown distribution, OOD detection has
received increasing attention lately. However, current endeavors mostly focus
on grid-structured data, and their application to graph-structured data remains
under-explored. Considering that data labeling on graphs is commonly
time-consuming and labor-intensive, in this work we study the problem of
unsupervised graph OOD detection, aiming at detecting OOD graphs solely based
on unlabeled ID data. To achieve this goal, we develop a new graph contrastive
learning framework GOOD-D for detecting OOD graphs without using any
ground-truth labels. By performing hierarchical contrastive learning on the
augmented graphs generated by our perturbation-free graph data augmentation
method, GOOD-D is able to capture the latent ID patterns and accurately detect
OOD graphs based on the semantic inconsistency at different granularities
(i.e., node-level, graph-level, and group-level). As a pioneering work in
unsupervised graph-level OOD detection, we build a comprehensive benchmark to
compare our proposed approach with different state-of-the-art methods. The
experimental results demonstrate the superiority of our approach over competing
methods on various datasets.
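To make the above description more concrete, the following is a minimal, hypothetical sketch of how semantic inconsistency between two views of a graph could be turned into an OOD score. It is not the authors' implementation: the encoders are stand-ins (random embeddings), and the granularity weights and function names are assumptions for illustration; in GOOD-D the representations are learned with hierarchical contrastive training on perturbation-free augmented views.

```python
# Hypothetical sketch (not the GOOD-D code): score a graph as OOD by the
# inconsistency between a feature-view embedding and a structure-view embedding,
# aggregated over node- and graph-level granularities.
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine distance between two embedding matrices."""
    a_n = a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-12)
    b_n = b / (np.linalg.norm(b, axis=-1, keepdims=True) + 1e-12)
    return 1.0 - np.sum(a_n * b_n, axis=-1)

def ood_score(node_feat, node_struct, graph_feat, graph_struct,
              w_node=0.5, w_graph=0.5):
    """Weighted inconsistency across granularities; the weights are illustrative.
    node_*: (num_nodes, d) per-node embeddings; graph_*: (d,) pooled embeddings."""
    node_term = cosine_distance(node_feat, node_struct).mean()
    graph_term = cosine_distance(graph_feat[None, :], graph_struct[None, :])[0]
    return w_node * node_term + w_graph * graph_term

# Toy usage with random embeddings standing in for trained encoder outputs.
rng = np.random.default_rng(0)
score = ood_score(rng.normal(size=(6, 16)), rng.normal(size=(6, 16)),
                  rng.normal(size=16), rng.normal(size=16))
print(f"OOD score: {score:.3f}")  # higher score => more likely OOD
```

In the paper a third, group-level granularity is also used; under the same aggregation pattern it would simply contribute one more weighted inconsistency term.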
Related papers
- GLIP-OOD: Zero-Shot Graph OOD Detection with Foundation Model [43.848482407777766]
Out-of-distribution (OOD) detection is critical for ensuring the safety and reliability of machine learning systems.
In this work, we take the first step toward enabling zero-shot graph OOD detection by leveraging a graph foundation model (GFM).
We introduce GLIP-OOD, a novel framework that employs LLMs to generate semantically informative pseudo-OOD labels from unlabeled data.
Our approach is the first to enable node-level graph OOD detection in a fully zero-shot setting, and achieves state-of-the-art performance on four benchmark text-attributed graph datasets.
arXiv Detail & Related papers (2025-04-29T21:42:54Z)
- Structural Entropy Guided Unsupervised Graph Out-Of-Distribution Detection [11.217628543343855]
Unsupervised out-of-distribution (OOD) detection is vital for ensuring the reliability of graph neural networks (GNNs).
Existing methods often suffer from compromised performance due to redundant information in graph structures.
We propose SEGO, an unsupervised framework that integrates structural entropy into OOD detection.
arXiv Detail & Related papers (2025-03-05T07:47:57Z)
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we introduce a systematic taxonomy that classifies existing methods into model-centric and data-centric approaches.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- HGOE: Hybrid External and Internal Graph Outlier Exposure for Graph Out-of-Distribution Detection [78.47008997035158]
Graph data exhibits greater diversity but lower robustness to perturbations, complicating the integration of outliers.
We propose Hybrid External and Internal Graph Outlier Exposure (HGOE) to improve graph OOD detection performance.
arXiv Detail & Related papers (2024-07-31T16:55:18Z)
- Unifying Unsupervised Graph-Level Anomaly Detection and Out-of-Distribution Detection: A Benchmark [73.58840254552656]
Unsupervised graph-level anomaly detection (GLAD) and unsupervised graph-level out-of-distribution (OOD) detection have received significant attention in recent years.
We present a Unified Benchmark for unsupervised Graph-level OOD and anomaly Detection (our method).
Our benchmark encompasses 35 datasets spanning four practical anomaly and OOD detection scenarios.
We conduct multi-dimensional analyses to explore the effectiveness, generalizability, robustness, and efficiency of existing methods.
arXiv Detail & Related papers (2024-06-21T04:07:43Z)
- GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Open-World Lifelong Graph Learning [7.535219325248997]
We study the problem of lifelong graph learning in an open-world scenario.
We utilize Out-of-Distribution (OOD) detection methods to recognize new classes.
We suggest performing new class detection by combining OOD detection methods with information aggregated from the graph neighborhood.
arXiv Detail & Related papers (2023-10-19T08:18:10Z)
- SGOOD: Substructure-enhanced Graph-Level Out-of-Distribution Detection [13.734411226834327]
We present SGOOD, a graph-level OOD detection framework.
We find that substructure differences commonly exist between ID and OOD graphs, and design SGOOD with a series of techniques to encode task-agnostic substructures for effective OOD detection.
Experiments against 11 competitors on numerous graph datasets demonstrate the superiority of SGOOD, often surpassing existing methods by a significant margin.
arXiv Detail & Related papers (2023-10-16T09:51:24Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area (a generic sketch of the energy score idea appears after this list).
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
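For context on the last entry (a generic note, not specific to this paper's propagation scheme): energy-based OOD detectors such as GNNSafe typically start from the standard energy score computed on classifier logits, where more OOD-like inputs receive higher energy. A minimal sketch of that score, assuming per-node logits from any GNN:

```python
# Generic energy score used by energy-based OOD detectors (GNNSafe additionally
# propagates these scores over the graph; that step is omitted here).
import numpy as np

def energy_score(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """E(x) = -T * logsumexp(f(x) / T); higher energy => more OOD-like."""
    z = logits / temperature
    m = z.max(axis=-1, keepdims=True)                        # for numerical stability
    lse = m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1))
    return -temperature * lse

# Toy usage: confident (ID-like) logits vs. flat, uncertain logits.
id_logits = np.array([[8.0, 0.5, 0.2], [7.5, 1.0, 0.3]])
ood_logits = np.array([[0.4, 0.5, 0.3], [0.2, 0.1, 0.4]])
print("ID energy:", energy_score(id_logits))    # lower (more negative) energy
print("OOD energy:", energy_score(ood_logits))  # higher energy
```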