Private Graph Extraction via Feature Explanations
- URL: http://arxiv.org/abs/2206.14724v2
- Date: Thu, 2 Nov 2023 05:32:31 GMT
- Title: Private Graph Extraction via Feature Explanations
- Authors: Iyiola E. Olatunji, Mandeep Rathee, Thorben Funke, Megha Khosla
- Abstract summary: We study the interplay of privacy and interpretability in graph machine learning through graph reconstruction attacks.
We show that additional knowledge of post-hoc feature explanations substantially increases the success rate of these attacks.
We propose a defense based on a randomized response mechanism for releasing the explanations, which substantially reduces the attack success rate.
- Score: 0.7442906193848509
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy and interpretability are two important ingredients for achieving
trustworthy machine learning. We study the interplay of these two aspects in
graph machine learning through graph reconstruction attacks. The goal of the
adversary here is to reconstruct the graph structure of the training data given
access to model explanations. Based on the different kinds of auxiliary
information available to the adversary, we propose several graph reconstruction
attacks. We show that additional knowledge of post-hoc feature explanations
substantially increases the success rate of these attacks. Further, we
investigate in detail the differences between attack performance with respect
to three different classes of explanation methods for graph neural networks:
gradient-based, perturbation-based, and surrogate model-based methods. While
gradient-based explanations reveal the most in terms of the graph structure, we
find that these explanations do not always score high in utility. For the other
two classes of explanations, privacy leakage increases with an increase in
explanation utility. Finally, we propose a defense based on a randomized
response mechanism for releasing the explanations, which substantially reduces
the attack success rate. Our code is available at
https://github.com/iyempissy/graph-stealing-attacks-with-explanation
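
To make the attack surface concrete, here is a minimal, hypothetical sketch of one explanation-based reconstruction heuristic: an adversary holding only per-node feature explanations links nodes whose explanation vectors are most similar. The function name, the top-k linking rule, and the use of cosine similarity are illustrative assumptions, not the paper's actual attacks.

```python
# Hypothetical explanation-similarity attack sketch (not the authors' method):
# connected nodes in a GNN tend to receive correlated feature-importance
# scores, so an adversary can guess edges from explanation similarity alone.
import numpy as np

def reconstruct_edges(explanations: np.ndarray, top_k: int = 10) -> np.ndarray:
    """explanations: (n_nodes, n_features) post-hoc feature-importance scores.
    Returns a binary adjacency estimate linking each node to its top_k
    most explanation-similar nodes."""
    # Cosine-normalize each node's explanation vector.
    norms = np.linalg.norm(explanations, axis=1, keepdims=True)
    unit = explanations / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T                      # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)           # exclude self-loops
    adj = np.zeros_like(sim, dtype=np.int8)
    idx = np.argsort(sim, axis=1)[:, -top_k:]  # k most similar nodes per row
    rows = np.repeat(np.arange(sim.shape[0]), top_k)
    adj[rows, idx.ravel()] = 1
    return np.maximum(adj, adj.T)            # symmetrize the estimate
```

The intuition is that a GNN aggregates information over a node's neighborhood, so neighboring nodes receive correlated explanations; the abstract's finding that gradient-based explanations leak the most structure fits this picture.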
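The proposed defense releases perturbed explanations via randomized response. The sketch below applies textbook per-bit randomized response to a binary explanation mask; the epsilon parameterization and the assumption of binary masks are ours, not necessarily the authors' exact mechanism.

```python
# Hedged sketch of a randomized-response release mechanism for binary
# explanation masks, in the spirit of the paper's defense.
import numpy as np

def randomized_response(mask: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
    """mask: binary feature-explanation mask. Each bit is kept with
    probability p = e^eps / (e^eps + 1) and flipped otherwise, which
    satisfies eps-local differential privacy per bit."""
    rng = np.random.default_rng() if rng is None else rng
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flip = rng.random(mask.shape) >= p_keep   # True where the bit is flipped
    return np.where(flip, 1 - mask, mask)
```

Smaller epsilon flips more bits, trading explanation utility for a lower reconstruction success rate.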
Related papers
- CLEAR: Generative Counterfactual Explanations on Graphs [60.30009215290265]
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
arXiv Detail & Related papers (2022-10-16T04:35:32Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- Exploring High-Order Structure for Robust Graph Structure Learning [33.62223306095631]
Graph Neural Networks (GNNs) are vulnerable to adversarial attacks, i.e., an imperceptible structure perturbation can fool GNNs into making wrong predictions.
In this paper, we analyze the adversarial attack on graphs from the perspective of feature smoothness.
We propose a novel algorithm that incorporates the high-order structural information into the graph structure learning.
arXiv Detail & Related papers (2022-03-22T07:03:08Z)
- Inference Attacks Against Graph Neural Networks [33.19531086886817]
Graph embedding is a powerful tool for solving graph analytics problems.
While sharing graph embedding is intriguing, the associated privacy risks are unexplored.
We systematically investigate the information leakage of the graph embedding by mounting three inference attacks.
arXiv Detail & Related papers (2021-10-06T10:08:11Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
arXiv Detail & Related papers (2021-04-15T17:51:36Z)
- Structural Information Preserving for Graph-to-Text Generation [59.00642847499138]
The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs.
We propose to tackle this problem by leveraging richer training signals that can guide our model for preserving input information.
Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline.
arXiv Detail & Related papers (2021-02-12T20:09:01Z)
- Adversarial Privacy Preserving Graph Embedding against Inference Attack [9.90348608491218]
Graph embedding has proven extremely useful for learning low-dimensional feature representations from graph-structured data.
Existing graph embedding methods do not consider users' privacy to prevent inference attacks.
We propose Adversarial Privacy Graph Embedding (APGE), a graph adversarial training framework that integrates disentangling and purging mechanisms to remove users' private information from learned node representations.
arXiv Detail & Related papers (2020-08-30T00:06:49Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
- Adversarial Attacks on Graph Neural Networks via Meta Learning [4.139895092509202]
We investigate training-time attacks on graph neural networks for node classification that perturb the discrete graph structure.
Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks.
arXiv Detail & Related papers (2019-02-22T09:20:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences arising from its use.