Uncertainty Propagation in Node Classification
- URL: http://arxiv.org/abs/2304.00918v1
- Date: Mon, 3 Apr 2023 12:18:23 GMT
- Title: Uncertainty Propagation in Node Classification
- Authors: Zhao Xu, Carolin Lawrence, Ammar Shaker, Raman Siarheyeu
- Abstract summary: We focus on measuring uncertainty of graph neural networks (GNNs) for the task of node classification.
We propose a Bayesian uncertainty propagation (BUP) method, which embeds GNNs in a Bayesian modeling framework.
We present an uncertainty-oriented loss for node classification that allows the GNNs to explicitly integrate predictive uncertainty into the learning procedure.
- Score: 9.03984964980373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantifying predictive uncertainty of neural networks has recently attracted
increasing attention. In this work, we focus on measuring uncertainty of graph
neural networks (GNNs) for the task of node classification. Most existing GNNs
model message passing among nodes. The messages are often deterministic.
Questions naturally arise: Does there exist uncertainty in the messages? How
could we propagate such uncertainty over a graph together with messages? To
address these issues, we propose a Bayesian uncertainty propagation (BUP)
method, which embeds GNNs in a Bayesian modeling framework, and models
predictive uncertainty of node classification with Bayesian confidence of
predictive probability and uncertainty of messages. Our method introduces a novel
uncertainty propagation mechanism inspired by Gaussian models. Moreover, we
present an uncertainty-oriented loss for node classification that allows the
GNNs to explicitly integrate predictive uncertainty into the learning procedure.
Consequently, training examples with large predictive uncertainty are
penalized. We evaluate BUP with respect to prediction reliability and
out-of-distribution (OOD) prediction. The learned uncertainty is also analyzed
in depth. The relations between uncertainty and graph topology, as well as
predictive uncertainty in the OOD cases are investigated with extensive
experiments. The empirical results with popular benchmark datasets demonstrate
the superior performance of the proposed method.
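The abstract's two ingredients, propagating uncertainty alongside messages and penalizing high-uncertainty training examples, can be illustrated with a minimal numeric sketch. The function names, the assumption that neighbour messages are independent Gaussians, and the additive-penalty loss form are illustrative choices, not the paper's exact BUP formulation.

```python
import numpy as np

def propagate_mean_var(adj, mu, var):
    """One round of mean/variance message passing on a graph.

    adj: (n, n) row-normalised adjacency matrix (with self-loops).
    mu:  (n, d) node feature means.
    var: (n, d) node feature variances.

    Under the illustrative assumption that neighbour messages are
    independent Gaussians, a weighted sum of messages has mean
    adj @ mu and variance (adj ** 2) @ var.
    """
    new_mu = adj @ mu
    new_var = (adj ** 2) @ var  # variances add with squared weights
    return new_mu, new_var

def uncertainty_weighted_loss(log_probs, labels, var):
    """Cross-entropy plus an additive penalty on predictive variance,
    so high-uncertainty training examples are penalized (a hypothetical
    form; the paper's exact uncertainty-oriented loss may differ)."""
    n = labels.shape[0]
    nll = -log_probs[np.arange(n), labels]
    penalty = var.mean(axis=1)
    return (nll + penalty).mean()
```

The squared aggregation weights are the Gaussian intuition behind the propagation mechanism: for independent random messages, variances combine as the squares of the mixing coefficients.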
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Uncertainty of Feed Forward Neural Networks Recognizing Quantum Contextuality [2.5665227681407243]
A powerful technique for estimating both accuracy and uncertainty is provided by Bayesian neural networks (BNNs).
We show that BNNs can provide reliable uncertainty estimates even after training on biased data sets.
arXiv Detail & Related papers (2022-12-27T17:33:46Z)
- A General Framework for Quantifying Aleatoric and Epistemic Uncertainty in Graph Neural Networks [0.29494468099506893]
Graph Neural Networks (GNNs) provide a powerful framework that elegantly integrates graph theory with machine learning.
We consider the problem of quantifying the uncertainty in GNN predictions stemming from modeling errors and measurement uncertainty.
We propose a unified approach to treat both sources of uncertainty in a Bayesian framework.
arXiv Detail & Related papers (2022-05-20T05:25:40Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
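The Nadaraya-Watson idea behind this entry can be sketched briefly: estimate the conditional label distribution as a kernel-weighted average of one-hot training labels, and read the entropy of that estimate as an uncertainty score. This is an illustrative reading of the approach, with a Gaussian kernel and bandwidth chosen for the example, not the paper's exact estimator.

```python
import numpy as np

def nw_label_distribution(x_train, y_train, x_query, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): kernel-weighted average
    of one-hot training labels, with entropy as a simple uncertainty
    score (illustrative sketch, not the paper's exact method)."""
    n_classes = int(y_train.max()) + 1
    # Squared distances between each query point and each training point
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel weights
    onehot = np.eye(n_classes)[y_train]
    probs = w @ onehot / w.sum(axis=1, keepdims=True)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return probs, entropy
```

A query point sitting on top of one class gets a near-one-hot estimate and low entropy; a query point equidistant from two classes gets a uniform estimate and maximal entropy.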
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
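The calibration step described here, raising the entropy of overconfident predictions toward the label prior, can be sketched as a simple interpolation. The mixing weight `alpha` and the criterion behind `confidence_mask` are illustrative placeholders; the paper's actual detection of unjustified overconfidence is more involved.

```python
import numpy as np

def raise_entropy_towards_prior(probs, prior, confidence_mask, alpha=0.5):
    """Where confidence_mask flags unjustifiably confident predictions
    (e.g. points far from the training data), interpolate the predicted
    distribution toward the label prior, raising its entropy.
    alpha and the mask criterion are illustrative, not the paper's."""
    adjusted = probs.copy()
    adjusted[confidence_mask] = (
        (1 - alpha) * probs[confidence_mask] + alpha * prior
    )
    return adjusted
```

Because the prior has higher entropy than a confident prediction, any convex mixture with it moves the flagged prediction toward uniformity without touching predictions the mask leaves alone.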
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Uncertainty Aware Semi-Supervised Learning on Graph Data [18.695343563823798]
We propose a multi-source uncertainty framework using a graph neural network (GNN) for node classification predictions.
By collecting evidence from the labels of training nodes, the Graph-based Kernel Dirichlet distribution Estimation (GKDE) method is designed for accurately predicting node-level Dirichlet distributions.
We found that dissonance-based detection yielded the best results on misclassification detection while vacuity-based detection was the best for OOD detection.
arXiv Detail & Related papers (2020-10-24T04:56:46Z)
- Probabilistic Neighbourhood Component Analysis: Sample Efficient Uncertainty Estimation in Deep Learning [25.8227937350516]
We show that uncertainty estimation capability of state-of-the-art BNNs and Deep Ensemble models degrades significantly when the amount of training data is small.
We propose a probabilistic generalization of the popular sample-efficient non-parametric kNN approach.
Our approach enables deep kNN to accurately quantify underlying uncertainties in its prediction.
arXiv Detail & Related papers (2020-07-18T21:36:31Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Getting a CLUE: A Method for Explaining Uncertainty Estimates [30.367995696223726]
We propose a novel method for interpreting uncertainty estimates from differentiable probabilistic models.
Our method, Counterfactual Latent Uncertainty Explanations (CLUE), indicates how to change an input, while keeping it on the data manifold.
arXiv Detail & Related papers (2020-06-11T21:53:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.