Similarity-Navigated Conformal Prediction for Graph Neural Networks
- URL: http://arxiv.org/abs/2405.14303v2
- Date: Sat, 04 Jan 2025 01:31:42 GMT
- Title: Similarity-Navigated Conformal Prediction for Graph Neural Networks
- Authors: Jianqing Song, Jianguo Huang, Wenyu Jiang, Baoming Zhang, Shuangjie Li, Chongjun Wang
- Abstract summary: We propose a novel algorithm named Similarity-Navigated Adaptive Prediction Sets (SNAPS).
SNAPS aggregates the non-conformity scores based on feature similarity and structural neighborhood.
It can generate compact prediction sets and increase the singleton hit ratio.
- Score: 6.318857043484474
- Abstract: Graph Neural Networks have achieved remarkable accuracy in semi-supervised node classification tasks. However, these results lack reliable uncertainty estimates. Conformal prediction methods provide a theoretical guarantee for node classification tasks, ensuring that the conformal prediction set contains the ground-truth label with a desired probability (e.g., 95%). In this paper, we empirically show that for each node, aggregating the non-conformity scores of nodes with the same label can improve the efficiency of conformal prediction sets while maintaining valid marginal coverage. This observation motivates us to propose a novel algorithm named Similarity-Navigated Adaptive Prediction Sets (SNAPS), which aggregates the non-conformity scores based on feature similarity and structural neighborhood. The key idea behind SNAPS is that nodes with high feature similarity or direct connections tend to have the same label. By incorporating adaptive similar nodes information, SNAPS can generate compact prediction sets and increase the singleton hit ratio (correct prediction sets of size one). Moreover, we theoretically provide a finite-sample coverage guarantee of SNAPS. Extensive experiments demonstrate the superiority of SNAPS, improving the efficiency of prediction sets and singleton hit ratio while maintaining valid coverage.
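To make the aggregation step concrete, below is a minimal NumPy sketch of SNAPS-style score aggregation as described in the abstract. The function name, the cosine-similarity choice, the top-k selection, and the weights `lam`/`mu` are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def snaps_scores(scores, features, adj, k=10, lam=0.3, mu=0.3):
    """Hypothetical sketch of SNAPS-style non-conformity score aggregation.

    scores:   (n, C) base non-conformity scores (e.g., APS scores) per node/class
    features: (n, d) node feature matrix
    adj:      (n, n) binary adjacency matrix
    lam, mu:  assumed weights for feature-similar and structural neighbors
    """
    # Cosine similarity between node features.
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-matches

    # Average scores over each node's k most feature-similar nodes.
    topk = np.argsort(-sim, axis=1)[:, :k]
    feat_agg = scores[topk].mean(axis=1)

    # Average scores over one-hop neighbors; isolated nodes keep their own scores.
    deg = adj.sum(axis=1, keepdims=True)
    struct_agg = np.divide(adj @ scores, deg, out=scores.copy(), where=deg > 0)

    # Convex combination of own, feature-similar, and neighborhood scores.
    return (1 - lam - mu) * scores + lam * feat_agg + mu * struct_agg
```

In the full method, the conformal quantile is then computed on the aggregated calibration scores, so the finite-sample coverage guarantee carries over to the aggregated prediction sets.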
Related papers
- Conformal Prediction Sets with Improved Conditional Coverage using Trust Scores [52.92618442300405]
It is impossible to achieve exact, distribution-free conditional coverage in finite samples.
We propose an alternative conformal prediction algorithm that targets coverage where it matters most.
arXiv Detail & Related papers (2025-01-17T12:01:56Z)
- Enhancing Trustworthiness of Graph Neural Networks with Rank-Based Conformal Training [17.120502204791407]
Conformal Prediction can produce statistically guaranteed uncertainty estimates.
We propose a rank-based CP-during-training framework for GNNs (RCP-GNN) to obtain reliable uncertainty estimates.
arXiv Detail & Related papers (2025-01-06T05:19:24Z)
- Matcha: Mitigating Graph Structure Shifts with Test-Time Adaptation [66.40525136929398]
Test-time adaptation (TTA) has attracted attention due to its ability to adapt a pre-trained model to a target domain, without re-accessing the source domain.
We propose Matcha, an innovative framework designed for effective and efficient adaptation to structure shifts in graphs.
We validate the effectiveness of Matcha on both synthetic and real-world datasets, demonstrating its robustness across various combinations of structure and attribute shifts.
arXiv Detail & Related papers (2024-10-09T15:15:40Z)
- Conformal Inductive Graph Neural Networks [58.450154976190795]
Conformal prediction (CP) transforms any model's output into prediction sets guaranteed to include (cover) the true label.
CP requires exchangeability, a relaxation of the i.i.d. assumption, to obtain a valid distribution-free coverage guarantee.
Conventional CP cannot be applied in inductive settings because message passing with newly added nodes implicitly shifts the (calibration) scores.
We prove that the guarantee holds independently of the prediction time, e.g. upon arrival of a new node/edge or at any subsequent moment.
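For context, the split conformal recipe that these graph CP papers build on can be stated in a few lines. The sketch below uses the simple score 1 minus the softmax probability of the true class and a finite-sample-corrected quantile; it is a generic illustration under the exchangeability assumption, not this paper's inductive construction.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.05):
    """Generic split conformal prediction for classification.

    cal_probs:  (n, C) softmax outputs on a held-out calibration set
    cal_labels: (n,) true calibration labels
    test_probs: (m, C) softmax outputs on test points
    Returns a boolean (m, C) mask: True means the class is in the set.
    """
    n = len(cal_labels)
    # Non-conformity score: one minus the probability of the true class.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(cal_scores, min(q_level, 1.0), method="higher")
    # Include every class whose score falls below the threshold.
    return (1.0 - test_probs) <= qhat
```

Under exchangeability of calibration and test scores, the returned sets contain the true label with probability at least 1 - alpha; the inductive subtlety is that message passing with newly arrived nodes shifts the calibration scores, which is what this paper resolves.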
arXiv Detail & Related papers (2024-07-12T11:12:49Z)
- Endowing Pre-trained Graph Models with Provable Fairness [49.8431177748876]
We propose GraphPAR, a novel adapter-tuning framework that endows pre-trained graph models with provable fairness.
Specifically, we design a sensitive semantic augmenter that extends each node's representation with different sensitive-attribute semantics.
With GraphPAR, we can quantify whether the fairness of each node is provable, i.e., whether its predictions are always fair within a certain range of sensitive-attribute semantics.
arXiv Detail & Related papers (2024-02-19T14:16:08Z)
- Uncertainty Quantification over Graph with Conformalized Graph Neural Networks [52.20904874696597]
Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data.
GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant.
We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates.
arXiv Detail & Related papers (2023-05-23T21:38:23Z)
- Refined Edge Usage of Graph Neural Networks for Edge Prediction [51.06557652109059]
We propose a novel edge prediction paradigm named Edge-aware Message PassIng neuRal nEtworks (EMPIRE).
We first introduce an edge-splitting technique that assigns each edge a single role, so that it is used solely as either topology or supervision (a minimal sketch follows below).
To emphasize the differences between node pairs connected by supervision edges and unconnected pairs, we further weight the messages to highlight those that reflect these differences.
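As referenced above, here is a minimal, hypothetical sketch of the edge-splitting step; the split ratio, seeding, and function name are assumptions, not the authors' code.

```python
import numpy as np

def split_edges(edges, supervision_frac=0.3, seed=0):
    """Randomly partition edges into topology and supervision roles.

    edges: (E, 2) array of node-index pairs; each edge gets exactly one role.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(len(edges)) < supervision_frac
    supervision_edges = edges[mask]    # used only as positive training pairs
    topology_edges = edges[~mask]      # used only for message passing
    return topology_edges, supervision_edges
```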
arXiv Detail & Related papers (2022-12-25T23:19:56Z)
- Distribution Free Prediction Sets for Node Classification [0.0]
We leverage recent advances in conformal prediction to construct prediction sets for node classification in inductive learning scenarios.
We show through experiments on standard benchmark datasets using popular GNN models that our approach provides tighter and better prediction sets than a naive application of conformal prediction.
arXiv Detail & Related papers (2022-11-26T12:54:45Z)
- Label-Only Membership Inference Attack against Node-Level Graph Neural Networks [30.137860266059004]
Graph Neural Networks (GNNs) are vulnerable to Membership Inference Attacks (MIAs).
We propose a label-only MIA against GNNs for node classification with the help of GNNs' flexible prediction mechanism.
Our attacking method achieves around 60% accuracy, precision, and Area Under the Curve (AUC) for most datasets and GNN models.
arXiv Detail & Related papers (2022-07-27T19:46:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.