Characterizing the Influence of Graph Elements
- URL: http://arxiv.org/abs/2210.07441v1
- Date: Fri, 14 Oct 2022 01:04:28 GMT
- Title: Characterizing the Influence of Graph Elements
- Authors: Zizhang Chen, Peizhao Li, Hongfu Liu and Pengyu Hong
- Abstract summary: The influence function of graph convolution networks (GCNs) can shed light on the effects of removing training nodes/edges from an input graph.
We show that the influence function of an SGC model could be used to estimate the impact of removing training nodes/edges on the test performance of the SGC without re-training the model.
- Score: 24.241010101383505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The influence function, a method from robust statistics, measures
the change in model parameters, or in functions of the model parameters,
caused by the removal or modification of training instances. It is an
efficient and useful post-hoc method for studying the interpretability of
machine learning models without the need for expensive model re-training.
Recently, graph convolution networks (GCNs), which operate on graph data, have
attracted a great deal of attention. However, no prior research has derived
influence functions for GCNs to shed light on the effects of removing training
nodes/edges from an input graph. Since the nodes/edges of a graph are
interdependent under a GCN, deriving influence functions for GCNs is
challenging. To fill this gap, we started with the simple graph convolution
(SGC) model, which operates on an attributed graph, and formulated an
influence function to approximate the change in model parameters when a node
or an edge is removed from the graph. Moreover, we theoretically analyzed the
error bound of the estimated influence of removing an edge. We experimentally
validated the accuracy and effectiveness of our influence estimation function.
In addition, we showed that the influence function of an SGC model can be used
to estimate the impact of removing training nodes/edges on the test
performance of the SGC without re-training the model. Finally, we demonstrated
how to use influence functions to guide adversarial attacks on GCNs
effectively.
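For background (this is the classical influence-function result, not the paper's own derivation): removing a training instance z from an empirical risk minimizer shifts the learned parameters by approximately (1/n) * H^{-1} * grad L(z, theta_hat), where H is the Hessian of the mean training loss at theta_hat. The sketch below applies this textbook formula to an SGC-style pipeline, in which features are first propagated as X' = S^K X and a regularized logistic regression is then fit on the labeled nodes. All function and variable names are illustrative, and this is per-point influence on an i.i.d. loss, not the graph-aware estimator the paper derives.
```python
import numpy as np

def propagate(adj, features, k=2):
    """SGC feature propagation: X' = S^k X with S = D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    s = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    out = features
    for _ in range(k):
        out = s @ out
    return out

def influence_of_removal(x_train, y_train, theta, z, l2=1e-3):
    """Approximate theta_{-z} - theta_hat for L2-regularized binary logistic
    regression on propagated features (labels y in {0, 1}); z indexes the
    training node whose removal we estimate."""
    n, d = x_train.shape
    p = 1.0 / (1.0 + np.exp(-x_train @ theta))   # predicted probabilities
    w = p * (1.0 - p)                            # per-sample Hessian weights
    hessian = (x_train * w[:, None]).T @ x_train / n + l2 * np.eye(d)
    grad_z = (p[z] - y_train[z]) * x_train[z]    # gradient of the loss at node z
    return np.linalg.solve(hessian, grad_z) / n  # (1/n) H^{-1} grad L(z, theta_hat)
```
The estimated change in a test node's loss then follows from the chain rule as grad L(z_test, theta_hat)^T times this parameter shift. Note the caveat the abstract stresses: this classical formula treats training points as independent, whereas removing a node or edge from a graph also perturbs the propagated features of neighboring nodes; accounting for that interdependence is precisely the paper's contribution.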
Related papers
- Erase then Rectify: A Training-Free Parameter Editing Approach for Cost-Effective Graph Unlearning [17.85404473268992]
Graph unlearning aims to eliminate the influence of nodes, edges, or attributes from a trained Graph Neural Network (GNN).
Existing graph unlearning techniques often necessitate additional training on the remaining data, leading to significant computational costs.
We propose a two-stage training-free approach, Erase then Rectify (ETR), designed for efficient and scalable graph unlearning.
arXiv Detail & Related papers (2024-09-25T07:20:59Z)
- Gradformer: Graph Transformer with Exponential Decay [69.50738015412189]
The self-attention mechanism in Graph Transformers (GTs) overlooks the graph's inductive biases, particularly those related to structure.
This paper presents Gradformer, a method that integrates the GT with this intrinsic inductive bias.
Gradformer consistently outperforms Graph Neural Network and GT baseline models in various graph classification and regression tasks.
arXiv Detail & Related papers (2024-04-24T08:37:13Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation of the deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z)
- A Graph Data Augmentation Strategy with Entropy Preserving [11.886325179121226]
We introduce a novel definition of graph entropy as a quantitative index of the feature information in a graph.
To preserve graph entropy, we propose an effective strategy that generates training data via a perturbation mechanism.
Our proposed approach significantly enhances the robustness and generalization ability of GCNs during the training process.
arXiv Detail & Related papers (2021-07-13T12:58:32Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging [112.19994766375231]
Influence functions approximate the 'influences' of training data points on test predictions.
We present FastIF, a set of simple modifications to influence functions that significantly improves their run-time.
Our experiments demonstrate the potential of influence functions in model interpretation and correcting model errors.
arXiv Detail & Related papers (2020-12-31T18:02:34Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- CatGCN: Graph Convolutional Networks with Categorical Node Features [99.555850712725]
CatGCN is tailored for graph learning when the node features are categorical.
We train CatGCN in an end-to-end fashion and demonstrate it on semi-supervised node classification.
arXiv Detail & Related papers (2020-09-11T09:25:17Z)