Distribution Free Prediction Sets for Node Classification
- URL: http://arxiv.org/abs/2211.14555v3
- Date: Mon, 8 Jan 2024 19:54:27 GMT
- Title: Distribution Free Prediction Sets for Node Classification
- Authors: Jase Clarkson
- Abstract summary: We leverage recent advances in conformal prediction to construct prediction sets for node classification in inductive learning scenarios.
We show through experiments on standard benchmark datasets using popular GNN models that our approach provides tighter and better calibrated prediction sets than a naive application of conformal prediction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) are able to achieve high classification accuracy
on many important real world datasets, but provide no rigorous notion of
predictive uncertainty. Quantifying the confidence of GNN models is difficult
due to the dependence between datapoints induced by the graph structure. We
leverage recent advances in conformal prediction to construct prediction sets
for node classification in inductive learning scenarios. We do this by taking
an existing approach for conformal classification that relies on
exchangeable data and modifying it by appropriately weighting the
conformal scores to reflect the network structure. We show through experiments
on standard benchmark datasets using popular GNN models that our approach
provides tighter and better calibrated prediction sets than a naive application
of conformal prediction.
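
The abstract describes reweighting conformal scores to reflect the graph structure. As a rough illustration only, here is a minimal NumPy sketch of split conformal classification with generic, user-supplied weights; the scoring rule (1 minus the softmax probability of the true class) and the placeholder uniform weights are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def weighted_quantile(scores, weights, alpha):
    """(1 - alpha) quantile of the weighted empirical distribution of the
    calibration scores, with the remaining mass placed at +infinity as in
    the usual weighted-conformal construction."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    total = w.sum() + 1.0           # the test point carries unit weight here
    cum = np.cumsum(w) / total
    idx = np.searchsorted(cum, 1.0 - alpha)
    return np.inf if idx >= len(s) else s[idx]

def prediction_set(test_probs, calib_probs, calib_labels, weights, alpha=0.1):
    """Prediction set for a single test node from softmax outputs.

    calib_probs  : (n_calib, n_classes) softmax outputs on calibration nodes
    calib_labels : (n_calib,) true labels of the calibration nodes
    weights      : (n_calib,) nonnegative weights, standing in for the
                   paper's network-aware weighting of conformal scores
    """
    # nonconformity score: 1 minus the probability assigned to the true class
    calib_scores = 1.0 - calib_probs[np.arange(len(calib_labels)), calib_labels]
    q = weighted_quantile(calib_scores, np.asarray(weights, dtype=float), alpha)
    # keep every class whose score falls below the weighted threshold
    return np.where(1.0 - test_probs <= q)[0]

# toy usage with random numbers standing in for real GNN outputs
rng = np.random.default_rng(0)
calib_probs = rng.dirichlet(np.ones(5), size=200)
calib_labels = rng.integers(0, 5, size=200)
test_probs = rng.dirichlet(np.ones(5))
print(prediction_set(test_probs, calib_probs, calib_labels,
                     weights=np.ones(200), alpha=0.1))
```

With all weights equal to one this reduces to ordinary split conformal classification; the paper's contribution lies in how the weights are chosen from the network structure, which is not reproduced here.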
Related papers
- RoCP-GNN: Robust Conformal Prediction for Graph Neural Networks in Node-Classification
Graph Neural Networks (GNNs) have emerged as powerful tools for predicting outcomes in graph-structured data.
One way to address the lack of uncertainty estimates is to provide prediction sets that contain the true label with a predefined coverage probability.
We propose a novel approach termed Robust Conformal Prediction for GNNs (RoCP-GNN).
Our approach robustly predicts outcomes with any predictive GNN model while quantifying the uncertainty in predictions within the realm of graph-based semi-supervised learning (SSL).
arXiv Detail & Related papers (2024-08-25T12:51:19Z)
- Conditional Shift-Robust Conformal Prediction for Graph Neural Network
Graph Neural Networks (GNNs) have emerged as potent tools for predicting outcomes in graph-structured data.
Despite their efficacy, GNNs have limited ability to provide robust uncertainty estimates.
We propose Conditional Shift Robust (CondSR) conformal prediction for GNNs.
arXiv Detail & Related papers (2024-05-20T11:47:31Z)
- Improving the interpretability of GNN predictions through conformal-based graph sparsification
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in solving graph classification tasks.
We propose a GNN training approach that finds the most predictive subgraph by removing edges and/or nodes.
We rely on reinforcement learning to solve the resulting bi-level optimization with a reward function based on conformal predictions.
arXiv Detail & Related papers (2024-04-18T17:34:47Z)
- Online GNN Evaluation Under Test-time Graph Distribution Shifts
A new research problem, online GNN evaluation, aims to provide valuable insights into well-trained GNNs' ability to generalize to real-world unlabeled graphs.
We develop an effective learning behavior discrepancy score, dubbed LeBeD, to estimate the test-time generalization errors of well-trained GNN models.
arXiv Detail & Related papers (2024-03-15T01:28:08Z)
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- Uncertainty Quantification over Graph with Conformalized Graph Neural Networks
Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data.
GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant.
We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates.
arXiv Detail & Related papers (2023-05-23T21:38:23Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Bayesian Graph Neural Networks with Adaptive Connection Sampling
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z)
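
The last entry above treats connection sampling as a unified, adaptive generalisation of dropout-style schemes such as DropEdge. As a loose illustration only, the sketch below samples a sparsified edge set for each forward pass with a fixed Bernoulli keep rate; the learned, layer-wise rates of the cited paper are not modelled, and `edge_index` and `sample_edges` are hypothetical names rather than that paper's API.

```python
import numpy as np

def sample_edges(edge_index, keep_prob, rng):
    """Bernoulli-sample graph connections for one forward pass.

    edge_index: (2, n_edges) array of (source, target) node indices
    keep_prob : probability of retaining each edge; the cited paper learns
                such rates adaptively per layer, here it is a fixed constant
    """
    mask = rng.random(edge_index.shape[1]) < keep_prob
    return edge_index[:, mask]

# toy usage: a different sparsified graph is drawn at every stochastic forward
# pass, which is what lets connection sampling act both as a dropout-style
# regulariser during training and as a source of predictive uncertainty at test time
rng = np.random.default_rng(0)
edge_index = rng.integers(0, 100, size=(2, 500))
samples = [sample_edges(edge_index, keep_prob=0.8, rng=rng) for _ in range(10)]
print([s.shape[1] for s in samples])  # number of surviving edges per sample
```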