Uncertainty in GNN Learning Evaluations: The Importance of a Consistent
Benchmark for Community Detection
- URL: http://arxiv.org/abs/2305.06026v5
- Date: Sat, 25 Nov 2023 18:57:51 GMT
- Authors: William Leeney, Ryan McConville
- Abstract summary: We propose a framework to establish a common evaluation protocol for Graph Neural Networks (GNNs).
We motivate and justify it by demonstrating the differences with and without the protocol.
We find that when the same evaluation criteria are followed, performance may differ significantly from that reported for methods at this task.
- Score: 4.358468367889626
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph Neural Networks (GNNs) have improved unsupervised community detection
of clustered nodes due to their ability to encode the dual dimensionality of
the connectivity and feature information spaces of graphs. Identifying the
latent communities has many practical applications from social networks to
genomics. Current benchmarks of real-world performance are confusing due to the
variety of decisions influencing the evaluation of GNNs at this task. To
address this, we propose a framework to establish a common evaluation protocol.
We motivate and justify it by demonstrating the differences with and without
the protocol. The W Randomness Coefficient is a metric proposed for assessing
the consistency of algorithm rankings to quantify the reliability of results
under the presence of randomness. We find that when the same evaluation
criteria are followed, results may differ significantly from the reported
performance of methods at this task, but a more complete evaluation and
comparison of methods is possible.
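One of the related entries below notes that the W Randomness Coefficient is based on the Wasserstein distance. The following is a minimal illustrative sketch of that idea, not the authors' exact formulation: measure how well separated each pair of algorithms' rank distributions are across random seeds. The pairwise aggregation and the normalisation by the largest possible rank gap are assumptions made here for illustration.

```python
# Illustrative sketch: quantifying how consistently algorithms rank across
# random seeds, using the 1-D Wasserstein distance between per-algorithm
# rank distributions. This is NOT the paper's exact W Randomness Coefficient;
# the pairwise aggregation and normalisation below are assumptions.
from itertools import combinations

import numpy as np
from scipy.stats import wasserstein_distance

def rankings_per_seed(scores):
    """scores: (n_seeds, n_algorithms) array of performance values.
    Returns ranks per seed, where 1 = best-performing algorithm."""
    order = np.argsort(-scores, axis=1)  # best-first indices per seed
    ranks = np.empty_like(order)
    n_algos = scores.shape[1]
    for s in range(scores.shape[0]):
        ranks[s, order[s]] = np.arange(1, n_algos + 1)
    return ranks

def ranking_separation(scores):
    """Mean pairwise Wasserstein distance between the rank distributions
    of each pair of algorithms across seeds, scaled by the largest
    possible rank gap. Higher values mean the benchmark's ranking is
    more stable under randomness."""
    ranks = rankings_per_seed(scores)
    n_algos = scores.shape[1]
    dists = [wasserstein_distance(ranks[:, a], ranks[:, b])
             for a, b in combinations(range(n_algos), 2)]
    return float(np.mean(dists)) / (n_algos - 1)
```

With five seeds and three hypothetical algorithms, an ordering that is identical every seed yields a higher separation score than one that flips between seeds, signalling that the flip-prone benchmark is less reliable.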
Related papers
- Enhancing Community Detection in Networks: A Comparative Analysis of Local Metrics and Hierarchical Algorithms [49.1574468325115]
This study employs the same method to evaluate the relevance of using local similarity metrics for community detection.
The efficacy of these metrics was evaluated by applying the base algorithm to several real networks with varying community sizes.
arXiv Detail & Related papers (2024-08-17T02:17:09Z)
- Online GNN Evaluation Under Test-time Graph Distribution Shifts [92.4376834462224]
A new research problem, online GNN evaluation, aims to provide valuable insights into well-trained GNNs' ability to generalize to real-world unlabeled graphs.
We develop an effective learning behavior discrepancy score, dubbed LeBeD, to estimate the test-time generalization errors of well-trained GNN models.
arXiv Detail & Related papers (2024-03-15T01:28:08Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of the GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks [40.95782849532316]
We propose a novel training framework designed to improve intrinsic GNN uncertainty estimates.
Our framework adapts the principle of centering data to graph data through novel graph anchoring strategies.
Our work provides insights into uncertainty estimation for GNNs, and demonstrates the utility of G-$\Delta$UQ in obtaining reliable estimates.
arXiv Detail & Related papers (2024-01-07T00:58:33Z)
- Uncertainty in GNN Learning Evaluations: A Comparison Between Measures for Quantifying Randomness in GNN Community Detection [4.358468367889626]
Real-world benchmarks are perplexing due to the multitude of decisions influencing GNN evaluations.
$W$ Randomness coefficient, based on the Wasserstein distance, is identified as providing the most robust assessment of randomness.
arXiv Detail & Related papers (2023-12-14T15:06:29Z)
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- Semi-supervised Community Detection via Structural Similarity Metrics [0.0]
We study a semi-supervised community detection problem in which the objective is to estimate the community label of a new node.
We propose an algorithm that computes a structural similarity metric between the new node and each of the $K$ communities.
Our findings highlight, to the best of our knowledge, the first semi-supervised community detection algorithm that offers theoretical guarantees.
arXiv Detail & Related papers (2023-06-01T19:02:50Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and could serve as a simple yet strong baseline in this under-developed area.
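Energy-based OOD detection of this kind typically derives a free energy from a classifier's logits, with unusually high energy flagging out-of-distribution inputs. The sketch below shows only the generic per-node energy score $E(x) = -T \log \sum_c e^{f_c(x)/T}$; GNNSafe's graph-specific components (such as propagating energies along edges) are not reproduced here, and the threshold is a hypothetical parameter chosen for illustration.

```python
# Generic energy score from classifier logits, a minimal sketch of the
# mechanism energy-based OOD detectors build on (not GNNSafe's full method).
import numpy as np

def energy_score(logits, temperature=1.0):
    """Negative free energy computed from per-node classifier logits.
    Lower (more negative) energy indicates a confident, likely
    in-distribution prediction; higher energy suggests OOD."""
    z = np.asarray(logits, dtype=float) / temperature
    m = z.max(axis=-1, keepdims=True)  # stabilise the log-sum-exp
    lse = m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1))
    return -temperature * lse

def flag_ood(logits, threshold=-2.0):
    """Flag inputs whose energy exceeds a (hypothetical) threshold."""
    return energy_score(logits) > threshold
```

A node with one dominant logit (a confident prediction) gets a much lower energy than a node with near-uniform logits, which is the separation the detector thresholds on.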
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Task-Agnostic Graph Neural Network Evaluation via Adversarial Collaboration [11.709808788756966]
GraphAC is a principled, task-agnostic, and stable framework for evaluating Graph Neural Network (GNN) research for molecular representation learning.
We introduce a novel objective function, the Competitive Barlow Twins, which allows two GNNs to jointly update themselves through direct competition against each other.
arXiv Detail & Related papers (2023-01-27T03:33:11Z)
- Implicit models, latent compression, intrinsic biases, and cheap lunches in community detection [0.0]
Community detection aims to partition a network into clusters of nodes to summarize its large-scale structure.
Some community detection methods are inferential, explicitly deriving the clustering objective through a probabilistic generative model.
Other methods are descriptive, dividing a network according to an objective motivated by a particular application.
We present a solution that associates any community detection objective, inferential or descriptive, with its corresponding implicit network generative model.
arXiv Detail & Related papers (2022-10-17T15:38:41Z)
- A Variational Edge Partition Model for Supervised Graph Representation Learning [51.30365677476971]
This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities.
We partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs.
A variational inference framework is proposed to jointly learn a GNN based inference network that partitions the edges into different communities, these community-specific GNNs, and a GNN based predictor that combines community-specific GNNs for the end classification task.
arXiv Detail & Related papers (2022-02-07T14:37:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.