Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration
- URL: http://arxiv.org/abs/2109.14285v1
- Date: Wed, 29 Sep 2021 09:08:20 GMT
- Title: Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration
- Authors: Xiao Wang, Hongrui Liu, Chuan Shi, Cheng Yang
- Abstract summary: Despite the remarkable accuracy of Graph Neural Networks (GNNs), whether their results are trustworthy remains largely unexplored.
Previous studies suggest that many modern neural networks are over-confident in their predictions.
We propose a novel trustworthy GNN model by designing a topology-aware post-hoc calibration function.
- Score: 32.26725705900001
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although Graph Neural Networks (GNNs) have achieved remarkable accuracy,
whether their results are trustworthy remains largely unexplored. Previous
studies suggest that many modern neural networks are over-confident in their
predictions; surprisingly, however, we discover that GNNs mostly err in the
opposite direction, i.e., GNNs are under-confident. Confidence calibration for
GNNs is therefore highly desirable. In this paper, we propose a novel
trustworthy GNN model by designing a topology-aware post-hoc calibration
function. Specifically, we first verify that the confidence distribution in a
graph has a homophily property, and this finding inspires us to design a
calibration GNN model (CaGCN) to learn the calibration function. CaGCN obtains
a unique transformation from the logits of a GNN to calibrated confidence for
each node; moreover, this transformation preserves the order between classes,
thus satisfying the accuracy-preserving property. We further apply CaGCN to a
self-training framework, showing that the calibrated confidence yields more
trustworthy pseudo labels and further improves performance. Extensive
experiments demonstrate the effectiveness of our proposed model in terms of
both calibration and accuracy.
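The accuracy-preserving mechanism is easy to make concrete: rescaling a node's logits by a positive per-node temperature changes the confidence but never the argmax. Below is a minimal, hedged sketch of such a topology-aware calibrator; the two-layer GCN, hidden size, and dense toy adjacency are illustrative choices, not the paper's exact configuration.

```python
# Hedged sketch of a CaGCN-style topology-aware calibrator: a small GCN maps
# each node's base-model logits to a positive per-node temperature, and
# dividing the logits by it rescales confidence without changing the argmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I, as in a standard GCN."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class CaGCNSketch(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 16):
        super().__init__()
        self.w1 = nn.Linear(num_classes, hidden)
        self.w2 = nn.Linear(hidden, 1)  # one temperature per node

    def forward(self, logits: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = F.relu(adj_norm @ self.w1(logits))
        # softplus keeps the temperature positive, so class order is preserved;
        # temperatures below 1 sharpen under-confident predictions
        t = F.softplus(adj_norm @ self.w2(h)) + 1e-6
        return logits / t  # calibrated logits; softmax gives calibrated confidence

# Usage: calibrate frozen GNN logits on a labeled calibration split.
num_nodes, num_classes = 6, 3
adj = (torch.rand(num_nodes, num_nodes) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()                 # toy symmetric graph
logits = torch.randn(num_nodes, num_classes)        # stand-in for base GNN outputs
confidence = CaGCNSketch(num_classes)(logits, normalize_adj(adj)).softmax(dim=-1)
```

For the self-training use, one would train the base GNN, fit the calibrator on a held-out labeled split, and then add unlabeled nodes whose calibrated confidence exceeds a threshold to the training set as pseudo-labeled examples.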
Related papers
- Online GNN Evaluation Under Test-time Graph Distribution Shifts [92.4376834462224]
A new research problem, online GNN evaluation, aims to provide valuable insights into a well-trained GNN's ability to generalize to real-world unlabeled graphs.
We develop an effective learning behavior discrepancy score, dubbed LeBeD, to estimate the test-time generalization errors of well-trained GNN models.
arXiv Detail & Related papers (2024-03-15T01:28:08Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks [40.95782849532316]
We propose a novel training framework designed to improve intrinsic GNN uncertainty estimates.
Our framework adapts the principle of centering data to graph data through novel graph anchoring strategies.
Our work provides insights into uncertainty estimation for GNNs, and demonstrates the utility of G-$\Delta$UQ in obtaining reliable estimates.
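To make the anchoring principle concrete, here is a hedged sketch of stochastic centering for uncertainty estimation: the model is trained on anchored inputs [x - c, c] and, at test time, predictions are averaged over random anchors, with their spread serving as an epistemic-uncertainty signal. The MLP backbone and raw-feature anchoring site are illustrative simplifications, not G-$\Delta$UQ's actual graph anchoring strategies.

```python
# Hedged sketch of stochastic anchoring ("centering") for uncertainty estimates.
import torch
import torch.nn as nn

class AnchoredNet(nn.Module):
    def __init__(self, in_dim: int, num_classes: int, hidden: int = 32):
        super().__init__()
        # the network sees [x - anchor, anchor], doubling the input width
        self.net = nn.Sequential(
            nn.Linear(2 * in_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )

    def forward(self, x: torch.Tensor, anchors: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x - anchors, anchors], dim=-1))

@torch.no_grad()
def predict_with_uncertainty(model, x, anchor_pool, n_anchors: int = 10):
    """Average predictions over random anchors; the std is an epistemic signal."""
    probs = []
    for _ in range(n_anchors):
        idx = torch.randint(0, anchor_pool.size(0), (x.size(0),))
        probs.append(model(x, anchor_pool[idx]).softmax(-1))
    probs = torch.stack(probs)
    return probs.mean(0), probs.std(0)

x = torch.randn(5, 8)  # toy node features
model = AnchoredNet(in_dim=8, num_classes=3)
mean_prob, uncertainty = predict_with_uncertainty(model, x, anchor_pool=x)
```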
arXiv Detail & Related papers (2024-01-07T00:58:33Z)
- Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper traces the major source of GNNs' performance gain to their intrinsic capability by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
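A hedged sketch of the idea follows: train the weights as a plain MLP (no graph needed), then switch on message passing at inference time with the very same weights. Placing the propagation step after each linear layer is an illustrative choice, not necessarily the paper's exact layout.

```python
# Hedged sketch of the PMLP idea: MLP at training time, GNN at test time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PMLPSketch(nn.Module):
    def __init__(self, in_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor | None = None):
        # adj_norm=None: ordinary MLP forward pass (cheap training).
        # adj_norm given: each layer becomes a GCN-style layer, same weights.
        h = self.lin1(x)
        if adj_norm is not None:
            h = adj_norm @ h
        h = self.lin2(F.relu(h))
        if adj_norm is not None:
            h = adj_norm @ h
        return h

x = torch.randn(4, 8)
adj_norm = torch.eye(4)              # stand-in for a normalized adjacency
model = PMLPSketch(8, 16, 3)
mlp_logits = model(x)                # training-time behaviour (pure MLP)
gnn_logits = model(x, adj_norm)      # test-time behaviour (message passing)
```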
arXiv Detail & Related papers (2022-12-18T08:17:32Z)
- What Makes Graph Neural Networks Miscalibrated? [48.00374886504513]
We conduct a systematic study on the calibration qualities of graph neural networks (GNNs).
We identify five factors which influence the calibration of GNNs: general under-confident tendency, diversity of nodewise predictive distributions, distance to training nodes, relative confidence level, and neighborhood similarity.
We design a novel calibration method named Graph Attention Temperature Scaling (GATS), which is tailored for calibrating graph neural networks.
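The interface of such a method can be sketched as follows: per-node temperatures produced by attention over neighbors' logits, applied as an order-preserving rescaling. This toy version conditions only on logits, whereas GATS accounts for all five identified factors, so treat it purely as an illustration of the shape of the approach.

```python
# Heavily simplified, GATS-inspired per-node temperature scaling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGATS(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 8):
        super().__init__()
        self.q = nn.Linear(num_classes, hidden)
        self.k = nn.Linear(num_classes, hidden)
        self.v = nn.Linear(num_classes, 1)
        self.scale = hidden ** 0.5

    def forward(self, logits: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # dot-product attention restricted to graph edges (adj must include
        # self-loops so every row has at least one neighbour)
        scores = self.q(logits) @ self.k(logits).t() / self.scale
        attn = scores.masked_fill(adj == 0, float("-inf")).softmax(dim=-1)
        temp = F.softplus(attn @ self.v(logits)) + 1e-6  # per-node temperature > 0
        return logits / temp  # positive rescaling keeps the argmax, hence accuracy

logits = torch.randn(5, 3)
adj = torch.eye(5)  # toy graph: self-loops only
calibrated = TinyGATS(num_classes=3)(logits, adj)
```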
arXiv Detail & Related papers (2022-10-12T16:41:42Z)
- TrustGNN: Graph Neural Network based Trust Evaluation via Learnable Propagative and Composable Nature [63.78619502896071]
Trust evaluation is critical for many applications such as cyber security, social communication and recommender systems.
We propose a new GNN-based trust evaluation method named TrustGNN, which smartly integrates the propagative and composable nature of trust graphs.
Specifically, TrustGNN designs specific propagative patterns for different propagative processes of trust, and distinguishes the contribution of different propagative processes to create new trust.
arXiv Detail & Related papers (2022-05-25T13:57:03Z)
- On the Dark Side of Calibration for Modern Neural Networks [65.83956184145477]
We show the breakdown of expected calibration error (ECE) into predicted confidence and refinement.
We highlight that regularisation-based calibration focuses only on naively reducing a model's confidence.
We find that many calibration approaches, such as label smoothing and mixup, lower the utility of a DNN by degrading its refinement.
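Since ECE is the quantity being decomposed, a standard equal-width-bin implementation is worth having at hand. This is the textbook binned estimator, with the bin count as a free parameter; the paper's confidence/refinement decomposition is built on top of statistics of this kind.

```python
# Standard equal-width-bin expected calibration error (ECE).
import torch

def expected_calibration_error(probs: torch.Tensor, labels: torch.Tensor,
                               n_bins: int = 15) -> float:
    conf, pred = probs.max(dim=-1)
    correct = pred.eq(labels).float()
    edges = torch.linspace(0, 1, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # |avg accuracy - avg confidence| weighted by bin population
            ece += mask.float().mean() * (correct[mask].mean() - conf[mask].mean()).abs()
    return float(ece)

probs = torch.softmax(torch.randn(100, 5), dim=-1)
labels = torch.randint(0, 5, (100,))
print(expected_calibration_error(probs, labels))
```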
arXiv Detail & Related papers (2021-06-17T11:04:14Z)
- Calibrating Deep Neural Network Classifiers on Out-of-Distribution Datasets [20.456742449675904]
CCAC (Confidence Calibration with an Auxiliary Class) is a new post-hoc confidence calibration method for deep neural networks (DNNs).
The key novelty of CCAC is an auxiliary class in the calibration model that separates misclassified samples from correctly classified ones.
Our experiments on different DNN models, datasets, and applications show that CCAC consistently outperforms prior post-hoc calibration methods.
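A loose sketch of the mechanism, under the assumption that the calibration model sits on top of the frozen base model's logits: a (K+1)-way head whose extra output is trained to fire on samples the base model gets wrong. The exact CCAC formulation differs; this only illustrates the auxiliary-class idea.

```python
# Loose sketch of a CCAC-style (K+1)-way post-hoc calibration head.
import torch
import torch.nn as nn

class CCACSketch(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes + 1),  # +1 auxiliary "error" class
        )

    def forward(self, logits: torch.Tensor):
        probs = self.net(logits).softmax(dim=-1)
        # first K entries: calibrated class probabilities; last entry: the
        # predicted probability that the base model is wrong. Mass routed to
        # the auxiliary class keeps confidence low on likely mistakes.
        return probs[..., :-1], probs[..., -1]

# Training target (hedged reading of the idea): the true class index when the
# base model is correct, otherwise the auxiliary class index num_classes.
```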
arXiv Detail & Related papers (2020-06-16T04:06:21Z)
- On Calibration of Mixup Training for Deep Neural Networks [1.6242924916178283]
We argue and provide empirical evidence that, due to its fundamentals, Mixup does not necessarily improve calibration.
We propose a loss, inspired by Bayes decision theory, that introduces a new training framework for designing losses for probabilistic modelling.
We provide state-of-the-art accuracy with consistent improvements in calibration performance.
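For reference, mixup itself is a two-line batch transformation; the soft, interpolated labels it produces are exactly what make its interaction with calibration non-trivial. The sketch below shows the standard construction, not the paper's proposed loss.

```python
# Standard mixup batch construction (Zhang et al.-style), for reference.
import torch

def mixup_batch(x: torch.Tensor, y_onehot: torch.Tensor, alpha: float = 0.4):
    """Convex-combine examples and labels; lam ~ Beta(alpha, alpha)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y_onehot + (1 - lam) * y_onehot[perm]

x = torch.randn(8, 10)
y = torch.nn.functional.one_hot(torch.randint(0, 3, (8,)), 3).float()
x_mix, y_mix = mixup_batch(x, y)  # y_mix is soft, no longer one-hot
```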
arXiv Detail & Related papers (2020-03-22T16:54:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.