Editable Graph Neural Network for Node Classifications
- URL: http://arxiv.org/abs/2305.15529v1
- Date: Wed, 24 May 2023 19:35:42 GMT
- Title: Editable Graph Neural Network for Node Classifications
- Authors: Zirui Liu, Zhimeng Jiang, Shaochen Zhong, Kaixiong Zhou, Li Li, Rui
Chen, Soo-Hyun Choi, Xia Hu
- Abstract summary: We propose \underline{E}ditable \underline{G}raph \underline{N}eural \underline{N}etworks (EGNN) to correct the model prediction on misclassified nodes.
EGNN simply stitches an MLP to the underlying GNNs, where the weights of the GNNs are frozen during model editing.
- Score: 43.39295712456175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have achieved prominent success in many
graph-based learning problems, such as credit risk assessment in financial
networks and fake news detection in social networks. However, trained GNNs
still make errors, and these errors may cause serious negative impacts on
society. \textit{Model editing}, which corrects the model behavior on wrongly
predicted target samples while leaving model predictions unchanged on unrelated
samples, has garnered significant interest in the fields of computer vision and
natural language processing. However, model editing for graph neural networks
(GNNs) is rarely explored, despite GNNs' widespread applicability. To fill the
gap, we first observe that existing model editing methods significantly
deteriorate prediction accuracy (up to a $50\%$ accuracy drop) in GNNs, while
causing only a slight accuracy drop in multi-layer perceptrons (MLPs). The
rationale behind this observation is that node aggregation in GNNs spreads the
editing effect throughout the whole graph, pushing the node representations far
from their original ones. Motivated by this observation, we propose
\underline{E}ditable \underline{G}raph \underline{N}eural \underline{N}etworks
(EGNN), a neighbor propagation-free approach to correct the model prediction on
misclassified nodes. Specifically, EGNN simply stitches an MLP to the
underlying GNNs, where the weights of GNNs are frozen during model editing. In
this way, EGNN disables the propagation during editing while still utilizing
the neighbor propagation scheme for node prediction to obtain satisfactory
results. Experiments demonstrate that EGNN outperforms existing baselines in
terms of effectiveness (correcting wrong predictions with lower accuracy drop),
generalizability (correcting wrong predictions for other similar nodes), and
efficiency (low training time and memory) on various graph datasets.
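The editing mechanism the abstract describes can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: a fixed linear map over aggregated features stands in for the frozen GNN, a single trainable linear layer over raw node features stands in for the stitched MLP, and only the latter is updated to flip one node's prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, 3 features, 2 classes, row-normalised adjacency.
X = rng.normal(size=(4, 3))
A = np.array([[1., 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]])
A = A / A.sum(axis=1, keepdims=True)

W_gnn = rng.normal(size=(3, 2))      # frozen "GNN" weights
gnn_logits = A @ X @ W_gnn           # one propagation step, computed once

# Trainable correction applied to raw features only, so edits never
# pass through the aggregation A @ X (propagation-free editing).
W_mlp = np.zeros((3, 2))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict():
    return (gnn_logits + X @ W_mlp).argmax(axis=1)

before = predict()
node = 0
target = 1 - before[node]            # flip node 0's predicted class

# Gradient descent on the cross-entropy of the target node alone;
# W_gnn stays untouched throughout the edit.
for _ in range(200):
    p = softmax(gnn_logits[node] + X[node] @ W_mlp)
    grad = p.copy()
    grad[target] -= 1.0              # d(cross-entropy)/d(logits)
    W_mlp -= 0.5 * np.outer(X[node], grad)

after = predict()
```

Because the correction term `X @ W_mlp` never re-enters the aggregation, the edit cannot spread through the graph, which mirrors the propagation-free property claimed above.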
Related papers
- Gradient Rewiring for Editable Graph Neural Network Training [84.77778876113099]
We propose a simple yet effective \underline{G}radient \underline{R}ewiring method for \underline{E}ditable graph neural network training, named \textbf{GRE}.
arXiv Detail & Related papers (2024-10-21T01:01:50Z) - RoCP-GNN: Robust Conformal Prediction for Graph Neural Networks in Node-Classification [0.0]
Graph Neural Networks (GNNs) have emerged as powerful tools for predicting outcomes in graph-structured data.
One way to address predictive uncertainty is to provide prediction sets that contain the true label with a predefined probability margin.
We propose a novel approach termed Robust Conformal Prediction for GNNs (RoCP-GNN).
Our approach robustly predicts outcomes with any predictive GNN model while quantifying the uncertainty in predictions within the realm of graph-based semi-supervised learning (SSL).
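Prediction sets of this kind are typically built with split conformal prediction. The sketch below is the generic split-conformal construction on synthetic calibration data, not RoCP-GNN's specific procedure; all names and distributions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cal, n_classes = 500, 4
alpha = 0.1                          # target miscoverage (90% coverage)

# Hypothetical calibration set: softmax outputs of some classifier
# together with labels drawn consistently with those probabilities.
probs = rng.dirichlet(np.ones(n_classes) * 2.0, size=n_cal)
labels = np.array([rng.choice(n_classes, p=p) for p in probs])

# Nonconformity score: one minus the probability of the true label.
scores = 1.0 - probs[np.arange(n_cal), labels]

# Calibrated threshold: the ceil((n+1)(1-alpha))/n empirical quantile.
level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q = np.quantile(scores, level, method="higher")

def prediction_set(p):
    """All labels whose nonconformity score falls below the threshold."""
    return np.flatnonzero(1.0 - p <= q)
```

By construction at least a $1-\alpha$ fraction of calibration points have their true label inside their own prediction set, which is the coverage guarantee such methods build on.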
arXiv Detail & Related papers (2024-08-25T12:51:19Z) - In-n-Out: Calibrating Graph Neural Networks for Link Prediction [22.729733086532875]
We show that graph neural networks (GNNs) may be overconfident in negative predictions while being underconfident in positive ones.
We propose IN-N-OUT, the first-ever method to calibrate GNNs for link prediction.
arXiv Detail & Related papers (2024-03-07T15:54:46Z) - GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z) - Interpreting Unfairness in Graph Neural Networks via Training Node
Attribution [46.384034587689136]
We study a novel problem of interpreting GNN unfairness through attributing it to the influence of training nodes.
Specifically, we propose a novel strategy named Probabilistic Distribution Disparity (PDD) to measure the bias exhibited in GNNs.
We verify the validity of PDD and the effectiveness of influence estimation through experiments on real-world datasets.
arXiv Detail & Related papers (2022-11-25T21:52:30Z) - On Structural Explanation of Bias in Graph Neural Networks [40.323880315453906]
Graph Neural Networks (GNNs) have shown satisfying performance in various graph analytical problems.
GNNs could yield biased results against certain demographic subgroups.
We study a novel research problem of structural explanation of bias in GNNs.
arXiv Detail & Related papers (2022-06-24T06:49:21Z) - Network In Graph Neural Network [9.951298152023691]
We present a model-agnostic methodology that allows arbitrary GNN models to increase their model capacity by making the model deeper.
Instead of adding or widening GNN layers, NGNN deepens a GNN model by inserting non-linear feedforward neural network layer(s) within each GNN layer.
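The layer construction can be made concrete with a small sketch. The code below is an illustrative NumPy rendering of the idea (a non-linear feedforward block inserted after a layer's aggregation step), not the NGNN reference implementation; shapes and weights are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 5, 8, 16
X = rng.normal(size=(n, d))          # 5 nodes, 8 input features

# Path graph with self-loops, row-normalised.
A = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A = A / A.sum(axis=1, keepdims=True)

W = rng.normal(size=(d, h)) * 0.1    # usual GNN layer weights
W_in = rng.normal(size=(h, h)) * 0.1 # inserted feedforward sub-layer

def gnn_layer(H, W):
    # Standard layer: aggregate neighbours, then linear transform + ReLU.
    return np.maximum(A @ H @ W, 0.0)

def ngnn_layer(H, W, W_in):
    # NGNN-style layer: the same single aggregation step, followed by an
    # extra non-linear feedforward block that deepens the transformation
    # without adding another round of neighbour propagation.
    H = np.maximum(A @ H @ W, 0.0)
    return np.maximum(H @ W_in, 0.0)

H_plain = gnn_layer(X, W)
H_ngnn = ngnn_layer(X, W, W_in)
```

The inserted block increases model depth per layer while keeping the receptive field (one hop here) unchanged.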
arXiv Detail & Related papers (2021-11-23T03:58:56Z) - Shift-Robust GNNs: Overcoming the Limitations of Localized Graph
Training data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
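The denoising view admits a compact worked example. Below is a small NumPy sketch of the standard objective $\|H - X\|_F^2 + c\,\mathrm{tr}(H^\top L H)$ used in this line of work (not ADA-UGNN itself): solving it exactly gives a smoothing operator, and a single gradient step from $H = X$ reproduces a GCN-style aggregation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 6, 4, 1.0
X = rng.normal(size=(n, d))          # noisy node signals

# Ring graph with normalised Laplacian L = I - D^{-1/2} A D^{-1/2}.
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

# Setting the gradient 2(H - X) + 2 c L H to zero gives the exact
# denoised solution H* = (I + c L)^{-1} X.
H_star = np.linalg.solve(np.eye(n) + c * L, X)

# One gradient step from H = X with step size 1/(2 + 2c) yields
# (X + c * Ahat @ X) / (1 + c), a weighted mix of each node's own
# features and its aggregated neighbourhood, i.e. GCN-like smoothing.
grad_at_X = 2.0 * c * L @ X
H_one_step = X - (1.0 / (2.0 + 2.0 * c)) * grad_at_X
```

This is the sense in which GNN aggregation can be read as (approximately) solving a graph denoising problem.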
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.