A Graph Is More Than Its Nodes: Towards Structured Uncertainty-Aware
Learning on Graphs
- URL: http://arxiv.org/abs/2210.15575v1
- Date: Thu, 27 Oct 2022 16:12:58 GMT
- Title: A Graph Is More Than Its Nodes: Towards Structured Uncertainty-Aware
Learning on Graphs
- Authors: Hans Hao-Hsun Hsu, Yuesong Shen, Daniel Cremers
- Abstract summary: We propose novel edgewise metrics, namely the edgewise expected calibration error (ECE) and the agree/disagree ECEs, which provide criteria for uncertainty estimation on graphs beyond the nodewise setting.
Our experiments demonstrate that the proposed edgewise metrics can complement the nodewise results and yield additional insights.
- Score: 49.76175970328538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current graph neural networks (GNNs) that tackle node classification on
graphs tend to only focus on nodewise scores and are solely evaluated by
nodewise metrics. This limits uncertainty estimation on graphs since nodewise
marginals do not fully characterize the joint distribution given the graph
structure. In this work, we propose novel edgewise metrics, namely the edgewise
expected calibration error (ECE) and the agree/disagree ECEs, which provide
criteria for uncertainty estimation on graphs beyond the nodewise setting. Our
experiments demonstrate that the proposed edgewise metrics can complement the
nodewise results and yield additional insights. Moreover, we show that GNN
models which consider the structured prediction problem on graphs tend to have
better uncertainty estimations, which illustrates the benefit of going beyond
the nodewise setting.
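As a rough illustration, an edgewise ECE can be sketched as follows: bin each edge by the model's confidence that its two endpoints share a label (computed from the nodewise marginals under an independence assumption) and compare that against the empirical rate at which the true endpoint labels agree. This is one plausible instantiation for illustration only; the paper's exact definitions of the edgewise and agree/disagree ECEs may differ.

```python
import numpy as np

def edgewise_ece(probs, labels, edges, n_bins=10):
    """Sketch of an edgewise expected calibration error (ECE).

    probs  : (N, C) per-node class probabilities
    labels : (N,) ground-truth node labels
    edges  : (E, 2) node-index pairs

    For each edge, confidence is the probability that both endpoints
    receive the same label under an independence assumption,
    sum_c p_i[c] * p_j[c], compared bin-by-bin with the empirical
    agreement rate of the true labels.
    """
    i, j = edges[:, 0], edges[:, 1]
    conf = np.sum(probs[i] * probs[j], axis=1)      # P(same label | independence)
    agree = (labels[i] == labels[j]).astype(float)  # empirical agreement

    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        # include the right edge only for the final bin
        mask = (conf >= lo) & (conf < hi) if hi < 1.0 else (conf >= lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - agree[mask].mean())
    return ece
```

A perfectly confident, correct model yields an edgewise ECE of zero, while a model that predicts 0.5 agreement on edges whose endpoints always disagree is penalized by the full gap.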
Related papers
- CUQ-GNN: Committee-based Graph Uncertainty Quantification using Posterior Networks [21.602569813024]
We study the influence of domain-specific characteristics when defining a meaningful notion of predictive uncertainty on graph data.
We propose a family of Committee-based Uncertainty Quantification Graph Neural Networks (CUQ-GNNs).
arXiv Detail & Related papers (2024-09-06T09:43:09Z)
- Probability Passing for Graph Neural Networks: Graph Structure and Representations Joint Learning [8.392545965667288]
Graph Neural Networks (GNNs) have achieved notable success in the analysis of non-Euclidean data across a wide range of domains.
To solve this problem, Latent Graph Inference (LGI) is proposed to infer a task-specific latent structure by computing similarity or edge probability of node features.
We introduce a novel method called Probability Passing to refine the generated graph structure by aggregating edge probabilities of neighboring nodes.
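A minimal sketch of such an aggregation step, assuming a dense edge-probability matrix and a fixed observed adjacency (the paper's exact operator may differ):

```python
import numpy as np

def probability_passing(edge_probs, adj, alpha=0.5):
    """Smooth a dense edge-probability matrix by averaging each entry
    with the corresponding probabilities of neighboring nodes.

    edge_probs : (N, N) matrix, edge_probs[i, j] ~ P(edge i-j exists)
    adj        : (N, N) binary adjacency of an observed graph
    alpha      : how much of the original probability to keep

    Hypothetical sketch; the paper's aggregation may differ in detail.
    """
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    norm_adj = adj / deg  # row-normalised adjacency
    # aggregate over the neighbours of i (rows) and of j (columns)
    neighbor_avg = 0.5 * (norm_adj @ edge_probs + edge_probs @ norm_adj.T)
    return alpha * edge_probs + (1 - alpha) * neighbor_avg
```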
arXiv Detail & Related papers (2024-07-15T13:01:47Z)
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning (named BOURNE).
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- The Graphical Nadaraya-Watson Estimator on Latent Position Models [0.0]
We are interested in the quality of the averaging estimator which for an unlabeled node predicts the average of the observations of its labeled neighbors.
While the estimator itself is very simple we believe that our results will contribute towards the theoretical understanding of learning on graphs through more sophisticated methods such as Graph Neural Networks.
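The averaging estimator itself is simple enough to state in a few lines; `neighbors` below is a hypothetical adjacency mapping used for illustration:

```python
import numpy as np

def averaging_estimator(y, labeled_mask, neighbors):
    """Nadaraya-Watson-style averaging on a graph: for each unlabeled
    node, predict the mean of the observations of its labeled neighbors.

    y            : (N,) observed values (only valid where labeled_mask is True)
    labeled_mask : (N,) boolean array marking labeled nodes
    neighbors    : dict mapping node index -> iterable of neighbor indices

    Returns predictions, NaN where a node has no labeled neighbor.
    """
    preds = np.full(len(y), np.nan)
    for v in range(len(y)):
        if labeled_mask[v]:
            continue  # only unlabeled nodes receive a prediction
        vals = [y[u] for u in neighbors.get(v, ()) if labeled_mask[u]]
        if vals:
            preds[v] = float(np.mean(vals))
    return preds
```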
arXiv Detail & Related papers (2023-03-30T08:56:28Z)
- ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The $scale$ operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes.
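One plausible reading of such a scale operation, reshaping each node's standard deviation with a power transform so that extreme NStd values are pulled toward the middle of the distribution, can be sketched as follows (the actual ResNorm operation may differ in detail):

```python
import numpy as np

def resnorm_scale(h, p=0.5, eps=1e-6):
    """Hedged sketch of a ResNorm-style `scale` step: reshape the
    node-wise standard deviation (NStd) distribution by raising each
    node's std to a power p in (0, 1), compressing the long tail.

    h : (N, D) node representations.
    """
    mean = h.mean(axis=1, keepdims=True)
    std = h.std(axis=1, keepdims=True) + eps
    # centred representation rescaled so its new NStd is roughly std ** p
    return (h - mean) / std * (std ** p)
```

With p in (0, 1), nodes whose NStd is below 1 are scaled up and nodes in the long upper tail are scaled down, flattening the distribution.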
arXiv Detail & Related papers (2022-06-16T13:49:09Z)
- Graph Posterior Network: Bayesian Predictive Uncertainty for Node Classification [37.86338466089894]
Uncertainty estimation for non-independent node-level predictions is under-explored.
We propose a new model Graph Posterior Network (GPN) which explicitly performs Bayesian posterior updates for predictions on nodes.
GPN outperforms existing approaches for uncertainty estimation in the experiments.
arXiv Detail & Related papers (2021-10-26T20:41:20Z)
- Graph Entropy Guided Node Embedding Dimension Selection for Graph Neural Networks [74.26734952400925]
We propose a novel Minimum Graph Entropy (MinGE) algorithm for Node Embedding Dimension Selection (NEDS).
MinGE considers both feature entropy and structure entropy on graphs, which are carefully designed according to the characteristics of the rich information in them.
Experiments with popular Graph Neural Networks (GNNs) on benchmark datasets demonstrate the effectiveness and generalizability of our proposed MinGE.
arXiv Detail & Related papers (2021-05-07T11:40:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.