NetFense: Adversarial Defenses against Privacy Attacks on Neural
Networks for Graph Data
- URL: http://arxiv.org/abs/2106.11865v1
- Date: Tue, 22 Jun 2021 15:32:50 GMT
- Title: NetFense: Adversarial Defenses against Privacy Attacks on Neural
Networks for Graph Data
- Authors: I-Chung Hsieh, Cheng-Te Li
- Abstract summary: We propose a novel research task, adversarial defenses against GNN-based privacy attacks.
We present a graph perturbation-based approach, NetFense, to achieve the goal.
- Score: 10.609715843964263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in protecting node privacy on graph data and in attacking graph
neural networks (GNNs) have gained much attention, yet these two
essential tasks have not been brought together. Imagine an adversary who can
utilize powerful GNNs to infer users' private labels in a social network. How can we
adversarially defend against such privacy attacks while maintaining the utility
of the perturbed graphs? In this work, we propose a novel research task,
adversarial defenses against GNN-based privacy attacks, and present a graph
perturbation-based approach, NetFense, to achieve the goal. NetFense can
simultaneously keep graph data unnoticeability (i.e., having limited changes on
the graph structure), maintain the prediction confidence of targeted label
classification (i.e., preserving data utility), and reduce the prediction
confidence of private label classification (i.e., protecting the privacy of
nodes). Experiments conducted on single- and multiple-target perturbations
using three real graph datasets show that the graphs perturbed by NetFense can
effectively maintain data utility (i.e., model unnoticeability) on targeted
label classification and significantly decrease the prediction confidence of
private label classification (i.e., privacy protection). Extensive studies also
yield several insights, such as the flexibility of NetFense, preserving local
neighborhoods in data unnoticeability, and better privacy protection for
high-degree nodes.
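To make the abstract's three objectives concrete, here is one way a graph-perturbation defense of this flavor could be organized: greedily flip the edges that most reduce the private-label confidence while penalizing any loss in targeted-label confidence, under a small flip budget. This is a minimal sketch of the trade-off, not NetFense's actual algorithm; the surrogate confidence callables, the candidate set, the budget, and the weight alpha are assumptions made for the example.

```python
import numpy as np

def greedy_defense_perturbation(adj, target_conf, private_conf, candidates,
                                budget=5, alpha=1.0):
    """Illustrative greedy edge perturbation (not the paper's algorithm).

    adj          : dense symmetric 0/1 adjacency matrix (numpy array)
    target_conf  : callable(adj) -> confidence of the targeted-label classifier (utility)
    private_conf : callable(adj) -> confidence of the private-label classifier (privacy)
    candidates   : iterable of (i, j) node pairs considered for edge flips
    budget       : maximum number of flips (graph unnoticeability)
    alpha        : weight on preserving targeted-label confidence
    """
    adj = adj.copy()
    base_t, base_p = target_conf(adj), private_conf(adj)
    for _ in range(budget):
        best_gain, best_edge = 0.0, None
        for i, j in candidates:
            flipped = adj.copy()
            flipped[i, j] = flipped[j, i] = 1 - flipped[i, j]  # add or remove edge (i, j)
            # reward a drop in private-label confidence, penalize any utility loss
            gain = (base_p - private_conf(flipped)) - alpha * abs(base_t - target_conf(flipped))
            if gain > best_gain:
                best_gain, best_edge = gain, (i, j)
        if best_edge is None:  # no remaining flip helps privacy without hurting utility
            break
        i, j = best_edge
        adj[i, j] = adj[j, i] = 1 - adj[i, j]
        base_t, base_p = target_conf(adj), private_conf(adj)
    return adj
```

Read against the abstract: the flip budget plays the role of graph data unnoticeability, the alpha-weighted penalty preserves targeted-label confidence (data utility), and the reward term reduces private-label confidence (privacy protection).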
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
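As a rough picture of what link stealing attacks such as the ones analyzed above exploit (a generic sketch, not this paper's inductive-setting method): connected nodes tend to receive similar posteriors from a trained GNN, so an attacker with query access can threshold a posterior-similarity score to guess whether an edge exists. The posteriors matrix and the threshold below are assumptions for the example.

```python
import numpy as np

def guess_link(posteriors, u, v, threshold=0.9):
    """Guess whether edge (u, v) exists from posterior similarity (illustrative).

    posteriors : (n_nodes, n_classes) array of class distributions obtained by
                 querying the target GNN for every node
    threshold  : cosine-similarity cutoff above which an edge is predicted (assumed)
    """
    p, q = posteriors[u], posteriors[v]
    cos = float(p @ q) / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
    return cos >= threshold
```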
- GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with Realistic Access to GNN Models [3.0509197593879844]
This paper investigates edge privacy in contexts where adversaries possess black-box GNN model access.
We introduce a series of privacy attacks grounded on the message-passing mechanism of GNNs.
arXiv Detail & Related papers (2023-11-03T20:26:03Z)
- A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
However, applying GNNs to sensitive graph data raises privacy concerns, and researchers have started to develop privacy-preserving GNNs to address this issue.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z)
- Node Injection Link Stealing Attack [0.649970685896541]
We present a stealthy and effective attack that exposes privacy vulnerabilities in Graph Neural Networks (GNNs) by inferring private links within graph-structured data.
Our work highlights the privacy vulnerabilities inherent in GNNs, underscoring the importance of developing robust privacy-preserving mechanisms for their application.
arXiv Detail & Related papers (2023-07-25T14:51:01Z)
- A Unified Framework of Graph Information Bottleneck for Robustness and Membership Privacy [43.11374582152925]
Graph Neural Networks (GNNs) have achieved great success in modeling graph-structured data.
However, GNNs are vulnerable to adversarial attacks that can fool the model into making the predictions an attacker desires.
In this work, we study a novel problem of developing robust and membership privacy-preserving GNNs.
arXiv Detail & Related papers (2023-06-14T16:11:00Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that contain information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
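A minimal sketch of the adversarial-filtering idea in the Information Obfuscation entry above, under simplifying assumptions: a plain cross-entropy adversary stands in for the paper's total-variation and Wasserstein objectives, the linear layers are placeholders for real GNN encoders, and the synthetic tensors stand in for node features, task labels, and a sensitive attribute.

```python
import torch
import torch.nn as nn

# Synthetic stand-ins (assumed): 100 nodes, 16 features, 4 task classes, binary sensitive attribute.
x = torch.randn(100, 16)
y = torch.randint(0, 4, (100,))
s = torch.randint(0, 2, (100,))

filter_net = nn.Linear(16, 8)   # produces obfuscated representations
task_head  = nn.Linear(8, 4)    # downstream task (utility)
adversary  = nn.Linear(8, 2)    # tries to recover the sensitive attribute

opt_main = torch.optim.Adam(list(filter_net.parameters()) + list(task_head.parameters()), lr=1e-2)
opt_adv  = torch.optim.Adam(adversary.parameters(), lr=1e-2)
ce, lam = nn.CrossEntropyLoss(), 0.5

for _ in range(200):
    z = filter_net(x)                                   # obfuscated node representations
    # adversary step: learn to predict the sensitive attribute from z
    adv_loss = ce(adversary(z.detach()), s)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
    # main step: stay accurate on the task while making the adversary fail
    main_loss = ce(task_head(z), y) - lam * ce(adversary(z), s)
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```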
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Locally Private Graph Neural Networks [12.473486843211573]
We study the problem of node data privacy, where graph nodes have potentially sensitive data that is kept private.
We develop a privacy-preserving, architecture-agnostic GNN learning algorithm with formal privacy guarantees.
Experiments conducted over real-world datasets demonstrate that our method can maintain a satisfying level of accuracy with low privacy loss.
arXiv Detail & Related papers (2020-06-09T22:36:06Z)
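The node-data-privacy setting of the Locally Private Graph Neural Networks entry above can be pictured with a generic local-perturbation sketch: each user randomizes their own features before sharing them, and the server trains the GNN on the noisy inputs. The Laplace mechanism below is a standard illustration rather than that paper's actual mechanism; the feature range and the per-value budget epsilon are assumptions.

```python
import numpy as np

def locally_perturb_features(features, epsilon, lo=0.0, hi=1.0):
    """Perturb node features on-device with the Laplace mechanism (illustrative).

    features : (n_nodes, d) array whose entries are assumed to lie in [lo, hi]
    epsilon  : privacy budget spent per feature value
    """
    sensitivity = hi - lo                                # worst-case change of one value
    noise = np.random.laplace(scale=sensitivity / epsilon, size=features.shape)
    return np.clip(features + noise, lo, hi)             # clipping is post-processing, so the guarantee is kept
```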