G-OSR: A Comprehensive Benchmark for Graph Open-Set Recognition
- URL: http://arxiv.org/abs/2503.00476v1
- Date: Sat, 01 Mar 2025 13:02:47 GMT
- Title: G-OSR: A Comprehensive Benchmark for Graph Open-Set Recognition
- Authors: Yicong Dong, Rundong He, Guangyao Chen, Wentao Zhang, Zhongyi Han, Jieming Shi, Yilong Yin
- Abstract summary: We introduce G-OSR, a benchmark for evaluating Graph Open-Set Recognition (GOSR) methods at both the node and graph levels. Results offer critical insights into the generalizability and limitations of current GOSR methods.
- Score: 54.45837774534411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have achieved significant success in machine learning, with wide applications in social networks, bioinformatics, knowledge graphs, and other fields. Most research assumes ideal closed-set environments. However, in real-world open-set environments, graph learning models face challenges in robustness and reliability due to unseen classes. This highlights the need for Graph Open-Set Recognition (GOSR) methods to address these issues and ensure effective GNN application in practical scenarios. Research in GOSR is still at an early stage and lacks a comprehensive benchmark spanning diverse tasks and datasets on which to evaluate methods. Moreover, traditional methods, Graph Out-of-Distribution Detection (GOODD), GOSR, and Graph Anomaly Detection (GAD) have mostly evolved in isolation, with little exploration of their interconnections or potential applications to GOSR. To fill these gaps, we introduce G-OSR, a comprehensive benchmark for evaluating GOSR methods at both the node and graph levels, using datasets from multiple domains to ensure fair and standardized comparisons of effectiveness and efficiency across traditional, GOODD, GOSR, and GAD methods. The results offer critical insights into the generalizability and limitations of current GOSR methods and provide valuable resources for advancing research in this field through systematic analysis of diverse approaches.
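For readers unfamiliar with the node-level GOSR setting the abstract describes, the sketch below illustrates one common confidence-thresholding baseline: a GCN is trained on the known classes only, and at test time nodes whose maximum softmax probability falls below a threshold are rejected as belonging to unseen classes. This is not code from the G-OSR benchmark; the model architecture, threshold value, and function names are illustrative assumptions.

```python
# Minimal sketch of a node-level open-set recognition baseline (illustrative,
# not the G-OSR benchmark code). A GCN is trained on known classes only; at
# inference, nodes with low maximum softmax confidence are rejected as unseen.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_known_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_known_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)  # logits over known classes only


@torch.no_grad()
def open_set_predict(model, x, edge_index, threshold=0.7):
    """Per-node predictions; -1 marks nodes rejected as unseen-class."""
    probs = F.softmax(model(x, edge_index), dim=-1)
    conf, pred = probs.max(dim=-1)
    pred[conf < threshold] = -1  # reject low-confidence nodes as "unknown"
    return pred
```

Stronger GOSR methods replace the fixed threshold with calibrated or learned rejection scores; comparing such design choices across node- and graph-level tasks is part of what the benchmark evaluates.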
Related papers
- In-depth Analysis of Graph-based RAG in a Unified Framework [17.941941997783267]
Graph-based Retrieval-Augmented Generation (RAG) has proven effective in integrating external knowledge into large language models.
We first summarize a unified framework to incorporate all graph-based RAG methods from a high-level perspective.
We then extensively compare representative graph-based RAG methods over a range of question-answering (QA) datasets.
arXiv Detail & Related papers (2025-03-06T11:34:49Z) - Out-of-Distribution Detection on Graphs: A Survey [58.47395497985277]
Graph out-of-distribution (GOOD) detection focuses on identifying graph data that deviates from the distribution seen during training.
We categorize existing methods into four types: enhancement-based, reconstruction-based, information propagation-based, and classification-based approaches.
We discuss practical applications and theoretical foundations, highlighting the unique challenges posed by graph data.
arXiv Detail & Related papers (2025-02-12T04:07:12Z) - Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs).
This framework provides a standardized setting to evaluate GNNs across diverse datasets.
We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z) - Deep Graph Anomaly Detection: A Survey and New Perspectives [86.84201183954016]
Graph anomaly detection (GAD) aims to identify unusual graph instances (nodes, edges, subgraphs, or graphs).
Deep learning approaches, graph neural networks (GNNs) in particular, have been emerging as a promising paradigm for GAD.
arXiv Detail & Related papers (2024-09-16T03:05:11Z) - Towards Graph Prompt Learning: A Survey and Beyond [38.55555996765227]
Large-scale "pre-train and prompt learning" paradigms have demonstrated remarkable adaptability.
This survey categorizes over 100 relevant works in this field, summarizing general design principles and the latest applications.
arXiv Detail & Related papers (2024-08-26T06:36:42Z) - Uncertainty in GNN Learning Evaluations: The Importance of a Consistent Benchmark for Community Detection [4.358468367889626]
We propose a framework to establish a common evaluation protocol for Graph Neural Networks (GNNs)
We motivate and justify it by demonstrating the differences with and without the protocol.
We find that when the same evaluation criteria are followed, performance can differ significantly from the results reported for methods on this task.
arXiv Detail & Related papers (2023-05-10T10:22:28Z) - Benchmarking Node Outlier Detection on Graphs [90.29966986023403]
Graph outlier detection is an emerging but crucial machine learning task with numerous applications.
We present the first comprehensive unsupervised node outlier detection benchmark for graphs called UNOD.
arXiv Detail & Related papers (2022-06-21T01:46:38Z) - Graph Learning based Recommender Systems: A Review [111.43249652335555]
Graph Learning based Recommender Systems (GLRS) employ advanced graph learning approaches to model users' preferences and intentions as well as items' characteristics for recommendations.
We provide a systematic review of GLRS, by discussing how they extract important knowledge from graph-based representations to improve the accuracy, reliability and explainability of the recommendations.
arXiv Detail & Related papers (2021-05-13T14:50:45Z) - Quantifying Challenges in the Application of Graph Representation Learning [0.0]
We provide an application oriented perspective to a set of popular embedding approaches.
We evaluate their representational power with respect to real-world graph properties.
Our results suggest that "one-to-fit-all" GRL approaches are hard to define in real-world scenarios.
arXiv Detail & Related papers (2020-06-18T03:19:43Z) - Graph Neighborhood Attentive Pooling [0.5493410630077189]
Network representation learning (NRL) is a powerful technique for learning low-dimensional vector representation of high-dimensional and sparse graphs.
We propose a novel context-sensitive algorithm called GAP that learns to attend on different parts of a node's neighborhood using attentive pooling networks.
arXiv Detail & Related papers (2020-01-28T15:05:48Z)