Graph Sparsification for Enhanced Conformal Prediction in Graph Neural Networks
- URL: http://arxiv.org/abs/2410.21618v1
- Date: Mon, 28 Oct 2024 23:53:51 GMT
- Title: Graph Sparsification for Enhanced Conformal Prediction in Graph Neural Networks
- Authors: Yuntian He, Pranav Maneriker, Anutam Srinivasan, Aditya T. Vadlamani, Srinivasan Parthasarathy
- Abstract summary: Conformal Prediction is a robust framework that ensures reliable coverage across machine learning tasks.
SparGCP incorporates graph sparsification and a conformal prediction-specific objective into GNN training.
Experiments on real-world graph datasets demonstrate that SparGCP outperforms existing methods.
- Score: 5.896352342095999
- Abstract: Conformal Prediction is a robust framework that ensures reliable coverage across machine learning tasks. Although recent studies have applied conformal prediction to graph neural networks, they have largely emphasized post-hoc prediction set generation. Improving conformal prediction during the training stage remains unaddressed. In this work, we tackle this challenge from a denoising perspective by introducing SparGCP, which incorporates graph sparsification and a conformal prediction-specific objective into GNN training. SparGCP employs a parameterized graph sparsification module to filter out task-irrelevant edges, thereby improving conformal prediction efficiency. Extensive experiments on real-world graph datasets demonstrate that SparGCP outperforms existing methods, reducing prediction set sizes by an average of 32% and scaling seamlessly to large networks on commodity GPUs.
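The abstract names two ingredients: a parameterized module that scores and filters edges, and a conformal-prediction-aware term in the training loss. The sketch below shows how such pieces could fit together; the class names, the soft sigmoid mask, the ConfTr-style differentiable set-size surrogate, and all hyperparameters are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeSparsifier(nn.Module):
    """Scores each edge from its endpoint features; low scores softly drop the edge."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, x, edge_index):
        src, dst = edge_index  # edge_index: LongTensor of shape (2, E)
        logits = self.scorer(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)  # soft edge-retention weights in (0, 1)

def smooth_set_size(logits, labels, alpha=0.1, temp=0.1):
    """Differentiable surrogate for conformal prediction set size (ConfTr-style):
    a class joins the set when its softmax score clears a threshold estimated
    from the true-class scores; a sigmoid relaxes that indicator for backprop."""
    probs = F.softmax(logits, dim=-1)
    true_scores = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    qhat = torch.quantile(true_scores, alpha)  # smooth stand-in for the CP threshold
    return torch.sigmoid((probs - qhat) / temp).sum(dim=-1).mean()

class SparseGNN(nn.Module):
    """Two-layer mean-aggregation GNN whose messages are damped by the learned mask."""
    def __init__(self, in_dim, hid, n_cls):
        super().__init__()
        self.sparsifier = EdgeSparsifier(in_dim)
        self.lin1, self.lin2 = nn.Linear(in_dim, hid), nn.Linear(hid, n_cls)

    def propagate(self, x, edge_index, w):
        src, dst = edge_index
        out = torch.zeros_like(x).index_add_(0, dst, w.unsqueeze(-1) * x[src])
        deg = torch.zeros(x.size(0), device=x.device).index_add_(0, dst, w)
        return out / (deg.unsqueeze(-1) + 1e-6)  # weighted mean over retained edges

    def forward(self, x, edge_index):
        w = self.sparsifier(x, edge_index)
        h = F.relu(self.lin1(self.propagate(x, edge_index, w)))
        return self.lin2(self.propagate(h, edge_index, w)), w

# Illustrative objective: task loss, a CP-efficiency term on calibration nodes,
# and a sparsity penalty that pushes task-irrelevant edge weights toward zero:
#   loss = F.cross_entropy(out[train_idx], y[train_idx]) \
#          + lam * smooth_set_size(out[cal_idx], y[cal_idx]) \
#          + mu * w.mean()
```

At inference time, the learned edge weights would be thresholded to obtain a hard sparsified graph before running standard conformal calibration on it.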
Related papers
- Conformal Load Prediction with Transductive Graph Autoencoders [1.5634429098976406]
This paper describes a Graph Neural Network (GNN) approach for edge weight prediction with guaranteed coverage.
We leverage conformal prediction to calibrate the GNN outputs and produce valid prediction intervals.
arXiv Detail & Related papers (2024-06-12T14:47:27Z)
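For context on "valid prediction intervals" in the entry above: split conformal prediction wraps any point predictor, including a GNN edge-weight regressor, in intervals with finite-sample marginal coverage. A minimal generic sketch in NumPy; it is not tied to the paper's transductive autoencoder:

```python
import numpy as np

def split_conformal_interval(pred_cal, y_cal, pred_test, alpha=0.1):
    """Split conformal regression: absolute residuals on a held-out calibration
    set yield a quantile qhat; intervals [pred - qhat, pred + qhat] then cover
    the true value with probability >= 1 - alpha (exchangeability assumed)."""
    resid = np.abs(y_cal - pred_cal)
    n = len(resid)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    qhat = np.quantile(resid, level)
    return pred_test - qhat, pred_test + qhat
```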
- Conditional Shift-Robust Conformal Prediction for Graph Neural Network [0.0]
Graph Neural Networks (GNNs) have emerged as potent tools for predicting outcomes in graph-structured data.
Despite their efficacy, GNNs have limited ability to provide robust uncertainty estimates.
We propose Conditional Shift Robust (CondSR) conformal prediction for GNNs.
arXiv Detail & Related papers (2024-05-20T11:47:31Z)
- Improving the interpretability of GNN predictions through conformal-based graph sparsification [9.550589670316523]
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in solving graph classification tasks.
We propose a GNN training approach that finds the most predictive subgraph by removing edges and/or nodes.
We rely on reinforcement learning to solve the resulting bi-level optimization with a reward function based on conformal predictions.
arXiv Detail & Related papers (2024-04-18T17:34:47Z)
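The bi-level idea in the entry above can be caricatured as: an agent proposes edge/node removals and is rewarded when conformal prediction sets on the retained subgraph stay small while still containing the true label. A toy reward under those assumptions; the paper's actual RL formulation may differ:

```python
import numpy as np

def conformal_reward(probs, labels, qhat, size_weight=0.1):
    """Reward a sparsified graph: prediction sets collect every class whose
    softmax score exceeds 1 - qhat; coverage is rewarded, set size penalized."""
    sets = probs >= (1.0 - qhat)                    # (n, C) boolean set membership
    covered = sets[np.arange(len(labels)), labels]  # does the set contain the label?
    size = sets.sum(axis=1)
    return (covered.astype(float) - size_weight * size).mean()
```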
- Uncertainty Quantification over Graph with Conformalized Graph Neural Networks [52.20904874696597]
Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data.
GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant.
We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates.
arXiv Detail & Related papers (2023-05-23T21:38:23Z)
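The CP recipe underlying conformalized GNNs is short enough to state directly: calibrate a nonconformity threshold on held-out labeled nodes, then emit every class whose score clears it. A minimal threshold-CP sketch, which omits CF-GNN's graph-aware refinements:

```python
import numpy as np

def conformal_node_sets(probs_cal, y_cal, probs_test, alpha=0.1):
    """Threshold conformal prediction for node classification. Nonconformity is
    1 - softmax probability of the true class; the calibrated quantile qhat
    gives sets with >= 1 - alpha marginal coverage under exchangeability."""
    scores = 1.0 - probs_cal[np.arange(len(y_cal)), y_cal]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, level)
    return (1.0 - probs_test) <= qhat  # (n_test, n_classes) boolean mask
```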
- Learning Large Graph Property Prediction via Graph Segment Training [61.344814074335304]
We propose Graph Segment Training (GST), a general framework for learning large graph property prediction with a constant memory footprint.
We refine the GST paradigm by introducing a historical embedding table to efficiently obtain embeddings for segments not sampled for backpropagation.
Our experiments show that GST-EFD is both memory-efficient and fast, while offering a slight boost on test accuracy over a typical full graph training regime.
arXiv Detail & Related papers (2023-05-21T02:53:25Z)
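The load-bearing trick in the entry above is the historical embedding table: segments not sampled for backpropagation reuse cached embeddings instead of being re-encoded. A toy sketch of that caching pattern; the naming and update rule are assumptions:

```python
import torch

class HistoricalEmbeddings:
    """Cache one embedding per graph segment; refresh only the sampled ones."""
    def __init__(self, num_segments, dim):
        self.table = torch.zeros(num_segments, dim)

    def lookup(self, segment_ids):
        return self.table[segment_ids]  # stale but cheap: no encoder pass, no grad

    def update(self, segment_ids, fresh_embeddings):
        self.table[segment_ids] = fresh_embeddings.detach()

# Per step: encode only the sampled segment with gradients, look up the rest,
# then pool all segment embeddings for the graph-level prediction head.
```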
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike methods based on the Lottery Ticket Hypothesis (LTH), the proposed CGP approach requires no re-training, which significantly reduces computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
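Gradual pruning typically ramps sparsity over training instead of pruning once and re-training. A generic cubic schedule in the spirit of such methods, not necessarily CGP's exact rule:

```python
def target_sparsity(step, total_steps, final_sparsity=0.9):
    """Cubic sparsity ramp (Zhu-and-Gupta style): prune gently early in
    training and aggressively later, reaching final_sparsity at the end."""
    t = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - t) ** 3)

# Each step: k = int(target_sparsity(step, total) * num_edges), then zero out
# the k lowest-magnitude edge scores -- no separate re-training round needed.
```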
- Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping [116.26579152942162]
Graph neural networks (GNNs) are information processing architectures that exploit graph structure to learn representations of network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
arXiv Detail & Related papers (2021-06-05T13:20:36Z)
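Uniform DropEdge removes edges with one global probability; a topology-adaptive variant biases the drop rate by a structural signal such as endpoint degree. A sketch under that assumption; the paper's exact criterion may differ:

```python
import torch

def adaptive_edge_drop(edge_index, num_nodes, base_p=0.3):
    """Drop each edge with probability scaled by its endpoints' degrees, so
    dense neighborhoods (with more redundant edges) are thinned harder."""
    src, dst = edge_index  # LongTensor of shape (2, E)
    deg = torch.zeros(num_nodes).index_add_(0, dst, torch.ones(dst.numel()))
    pair_deg = deg[src] + deg[dst]
    p = base_p * pair_deg / pair_deg.max().clamp(min=1.0)
    keep = torch.rand(edge_index.size(1)) > p
    return edge_index[:, keep]
```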
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet significantly improves the performance of GNNs, with larger gains on noisier datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
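PTDNet's mechanism, as summarized, is a parameterized per-edge mask whose sum (the expected number of retained edges) is penalized. This mirrors the sparsifier sketched under the main abstract; only the penalty differs. A minimal sketch, noting that the full method adds further regularizers not shown here:

```python
import torch
import torch.nn as nn

class EdgeDenoiser(nn.Module):
    """Learned mask over edges; penalizing its sum discourages keeping
    task-irrelevant edges in the sparsified graph."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, x, edge_index):
        src, dst = edge_index
        mask = torch.sigmoid(self.score(torch.cat([x[src], x[dst]], dim=-1))).squeeze(-1)
        penalty = mask.sum()  # expected count of retained edges
        return mask, penalty

# total loss = task_loss(masked_graph) + beta * penalty
```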
This list is automatically generated from the titles and abstracts of the papers on this site.