Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks
- URL: http://arxiv.org/abs/2310.09800v1
- Date: Sun, 15 Oct 2023 11:16:14 GMT
- Title: Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks
- Authors: Renyang Liu, Wei Zhou, Jinhong Zhang, Xiaoyuan Liu, Peiyuan Si, Haoran Li
- Abstract summary: HomoGMI and HeteGMI are gradient-descent-based optimization methods that aim to maximize the cross-entropy loss on the target GNN.
HeteGMI is the first attempt to perform model inversion attacks on HeteGNNs.
- Score: 6.569169627119353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Graph Neural Networks (GNNs), including Homogeneous Graph Neural Networks (HomoGNNs) and Heterogeneous Graph Neural Networks (HeteGNNs), have made remarkable progress in many physical scenarios, especially in communication applications. Despite this success, the privacy of such models has also received considerable attention. Previous studies have shown that, given a well-fitted target GNN, an attacker can reconstruct the sensitive training graph of the model via model inversion attacks, raising significant privacy concerns for the AI service provider. We argue that this vulnerability stems from the target GNN itself and from prior knowledge about the shared properties of real-world graphs. Inspired by this, we propose novel model inversion attack methods on HomoGNNs and HeteGNNs, namely HomoGMI and HeteGMI. Specifically, HomoGMI and HeteGMI are gradient-descent-based optimization methods that aim to maximize the cross-entropy loss on the target GNN and the $1^{st}$- and $2^{nd}$-order proximities on the reconstructed graph. Notably, to the best of our knowledge, HeteGMI is the first attempt to perform model inversion attacks on HeteGNNs. Extensive experiments on multiple benchmarks demonstrate that the proposed methods achieve better performance than competing approaches.
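To make the attack recipe concrete, the following is a minimal sketch of a gradient-descent graph-reconstruction loop in the spirit of the objective described above. It is an illustration under stated assumptions, not the authors' implementation: `target_gnn` is a hypothetical model taking (features, adjacency) and returning per-node logits, and the two proximity terms are simple stand-ins for the paper's $1^{st}$- and $2^{nd}$-order proximity definitions.

```python
import torch
import torch.nn.functional as F

def invert_graph(target_gnn, x, labels, n_nodes, steps=200, lr=0.1,
                 alpha=1.0, beta=1.0):
    """Sketch of a gradient-descent model inversion attack on a GNN.

    Optimizes a continuous relaxation of the adjacency matrix so that
    the cross-entropy on the target model and two proximity terms are
    jointly maximized, as the abstract describes.
    """
    # Learnable parameter behind a symmetric, (0, 1)-valued relaxation
    # of the discrete adjacency matrix.
    a_param = torch.zeros(n_nodes, n_nodes, requires_grad=True)
    opt = torch.optim.Adam([a_param], lr=lr)

    for _ in range(steps):
        adj = torch.sigmoid((a_param + a_param.T) / 2)

        logits = target_gnn(x, adj)           # hypothetical interface
        ce = F.cross_entropy(logits, labels)

        # Stand-in 1st-order proximity: connected nodes have similar features.
        first = (adj * (x @ x.T)).mean()
        # Stand-in 2nd-order proximity: nodes with overlapping neighborhoods
        # have similar features.
        second = ((adj @ adj.T) * (x @ x.T)).mean()

        # The abstract states the attack *maximizes* these terms,
        # so we minimize their negation.
        loss = -(ce + alpha * first + beta * second)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Threshold the relaxation back to a discrete graph.
    return (torch.sigmoid((a_param + a_param.T) / 2) > 0.5).float()
```

The sigmoid relaxation plus thresholding is one common way to optimize over discrete graph structure; the paper's exact parameterization, losses, and HeteGNN handling may differ.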
Related papers
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Attentional Graph Neural Networks for Robust Massive Network Localization [20.416879207269446]
Graph neural networks (GNNs) have emerged as a prominent tool for classification tasks in machine learning.
This paper integrates GNNs with an attention mechanism to tackle a challenging nonlinear regression problem: network localization.
We first introduce a novel network localization method based on graph convolutional network (GCN), which exhibits exceptional precision even under severe non-line-of-sight (NLOS) conditions.
arXiv Detail & Related papers (2023-11-28T15:05:13Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that gives homogeneous GNNs sufficient ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge-type relations and self-loop connections (see the sketch after this entry).
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
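As referenced in the RE-GNN entry above, a minimal sketch of the one-scalar-per-relation idea, assuming a dense-adjacency interface; the class name and update rule are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class RelationEmbeddingLayer(nn.Module):
    """Hypothetical layer in the spirit of RE-GNN: one learnable scalar
    per edge type (plus one for self-loops) weights each relation's
    messages before a shared homogeneous GNN update."""

    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.rel_weight = nn.Parameter(torch.ones(num_relations))  # one scalar per relation
        self.self_weight = nn.Parameter(torch.ones(1))             # self-loop importance
        self.lin = nn.Linear(in_dim, out_dim)                      # shared transform

    def forward(self, x, adjs):
        # adjs: one (num_nodes, num_nodes) adjacency matrix per relation.
        agg = self.self_weight * x
        for r, adj in enumerate(adjs):
            agg = agg + self.rel_weight[r] * (adj @ x)
        return torch.relu(self.lin(agg))
```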
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter (see the sketch after this entry).
arXiv Detail & Related papers (2022-05-27T10:48:14Z)
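As referenced in the EvenNet entry above, a minimal sketch of an even-polynomial propagation: only even powers of the normalized adjacency enter the filter, so odd-hop neighbors are ignored. The class and its parameterization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EvenPolynomialFilter(nn.Module):
    """Sketch of an even-polynomial spectral filter in the spirit of
    EvenNet: the output is sum_k theta_k * A^(2k) x, so a node is only
    influenced by its even-hop neighbors."""

    def __init__(self, k_max):
        super().__init__()
        # One coefficient per even power A^0, A^2, ..., A^(2*k_max).
        self.theta = nn.Parameter(torch.ones(k_max + 1))

    def forward(self, x, adj_norm):
        out = self.theta[0] * x
        h = x
        for k in range(1, len(self.theta)):
            h = adj_norm @ (adj_norm @ h)  # advance two hops at a time
            out = out + self.theta[k] * h
        return out
```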
- CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks [59.692017490560275]
Adversarial training has been widely demonstrated to improve a model's robustness against adversarial attacks.
It remains unclear how adversarial training could improve the generalization ability of GNNs on graph analytics problems.
We construct the co-adversarial perturbation (CAP) optimization problem over weights and features, and design an alternating adversarial perturbation algorithm that flattens the weight and feature loss landscapes in turn (see the sketch after this entry).
arXiv Detail & Related papers (2021-10-28T02:28:13Z)
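As referenced in the CAP entry above, a hypothetical single training step illustrating alternating perturbations on features and weights. The signed-gradient feature step and SAM-style weight step are stand-ins chosen for this sketch; the paper's exact algorithm may differ.

```python
import torch

def cap_style_step(model, x, y, loss_fn, opt, rho_w=1e-2, rho_f=1e-2):
    """Hypothetical co-adversarial training step: perturb features and
    weights in their gradient (ascent) directions, then descend on the
    perturbed loss to flatten both loss landscapes."""
    # --- adversarial perturbation on input features ---
    x_adv = x.detach().clone().requires_grad_(True)
    loss_f = loss_fn(model(x_adv), y)
    grad_f = torch.autograd.grad(loss_f, x_adv)[0]
    x_adv = (x + rho_f * grad_f.sign()).detach()

    # --- adversarial perturbation on weights (SAM-style ascent) ---
    loss_w = loss_fn(model(x_adv), y)
    grads = torch.autograd.grad(loss_w, list(model.parameters()))
    eps = [rho_w * g / (g.norm() + 1e-12) for g in grads]
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.add_(e)

    # --- descend on the co-perturbed loss, then undo the weight shift ---
    opt.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)
    opt.step()
    return loss.item()
```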
- Ranking Structured Objects with Graph Neural Networks [0.0]
RankGNNs are trained on a set of pair-wise preferences between graphs, each indicating that one graph is preferred over the other.
One practical application of this problem is drug screening, where an expert wants to find the most promising molecules in a large collection of drug candidates.
We empirically demonstrate that our proposed pair-wise RankGNN approach either significantly outperforms or at least matches the ranking performance of the naive point-wise baseline approach (see the sketch after this entry).
arXiv Detail & Related papers (2021-04-18T14:40:59Z)
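As referenced in the RankGNN entry above, a minimal sketch of a pair-wise ranking objective, assuming a hypothetical `gnn_readout` model that maps a graph to a scalar score.

```python
import torch.nn.functional as F

def pairwise_rank_loss(score_preferred, score_other):
    """RankNet-style logistic loss: push the preferred graph's score
    above the other graph's score (sketch in the spirit of RankGNN)."""
    return F.softplus(score_other - score_preferred).mean()

# Usage sketch (names are assumptions):
#   s_a = gnn_readout(graph_a)   # preferred graph
#   s_b = gnn_readout(graph_b)
#   loss = pairwise_rank_loss(s_a, s_b)
```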
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.