A Unified Framework of Graph Information Bottleneck for Robustness and
Membership Privacy
- URL: http://arxiv.org/abs/2306.08604v1
- Date: Wed, 14 Jun 2023 16:11:00 GMT
- Title: A Unified Framework of Graph Information Bottleneck for Robustness and
Membership Privacy
- Authors: Enyan Dai, Limeng Cui, Zhengyang Wang, Xianfeng Tang, Yinghan Wang,
Monica Cheng, Bing Yin, Suhang Wang
- Abstract summary: Graph Neural Networks (GNNs) have achieved great success in modeling graph-structured data.
GNNs are vulnerable to adversarial attacks that can fool the model into making predictions desired by the attacker.
In this work, we study a novel problem of developing robust and membership privacy-preserving GNNs.
- Score: 43.11374582152925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have achieved great success in modeling
graph-structured data. However, recent works show that GNNs are vulnerable to
adversarial attacks, which can fool the GNN model into making predictions
desired by the attacker. In addition, the training data of GNNs can be leaked
under membership inference attacks. This largely hinders the adoption of GNNs
in high-stakes domains such as e-commerce, finance, and bioinformatics. Though
robust prediction and membership-privacy protection have each been
investigated, existing work generally fails to consider robustness and
membership privacy simultaneously. Therefore, in this work, we study a novel
problem of developing robust and membership privacy-preserving GNNs. Our
analysis shows that the Information Bottleneck (IB) can help filter out noisy
information and regularize the predictions on labeled samples, which benefits
both robustness and membership privacy. However, structural noise and the lack
of labels in node classification challenge the deployment of IB on
graph-structured data. To mitigate these issues, we propose a novel graph
information bottleneck framework that alleviates structural noise with a
neighbor bottleneck. Pseudo labels are also incorporated in the optimization to
minimize the gap between predictions on the labeled and unlabeled sets, which
preserves membership privacy. Extensive experiments on real-world datasets
demonstrate that our method gives robust predictions while simultaneously
preserving membership privacy.
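For intuition, here is a minimal sketch, assuming a PyTorch setting, of how an IB-style objective with a neighbor bottleneck and pseudo-label regularization could be wired together. The Gumbel-sigmoid edge gate, the variational KL term, and all function names and loss weights below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative IB-style training objective for node classification.
# NOT the authors' released code: names, weights, and the specific
# relaxations are assumptions mirroring the abstract's description.
import torch
import torch.nn.functional as F

def neighbor_bottleneck(edge_logits, temperature=0.5):
    """Sample soft keep/drop gates for edges (the 'neighbor bottleneck'),
    so noisy neighbors can be down-weighted before message passing."""
    # Binary Concrete / Gumbel-sigmoid relaxation of Bernoulli edge masks.
    noise = torch.rand_like(edge_logits).clamp(1e-6, 1 - 1e-6)
    gumbel = torch.log(noise) - torch.log(1 - noise)
    return torch.sigmoid((edge_logits + gumbel) / temperature)

def gib_loss(mu, logvar, logits, labels, labeled_mask, pseudo_labels,
             beta=0.01, gamma=0.5):
    """Combine the three loss terms; the stochastic representation
    Z ~ N(mu, diag(sigma^2)) is assumed to have produced `logits` upstream."""
    # (1) Prediction term on labeled nodes (lower bound on I(Z; Y)).
    ce = F.cross_entropy(logits[labeled_mask], labels[labeled_mask])

    # (2) Compression term: KL(q(Z|X) || N(0, I)) upper-bounds I(Z; X),
    # discouraging Z from memorizing input noise.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

    # (3) Pseudo-label term on unlabeled nodes, aligning their loss profile
    # with the labeled set.
    pl = F.cross_entropy(logits[~labeled_mask], pseudo_labels[~labeled_mask])

    return ce + beta * kl + gamma * pl
```

The pseudo-label term is the privacy-relevant piece: it pushes the loss profile of unlabeled nodes toward that of labeled ones, shrinking exactly the gap that membership inference attacks exploit.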
Related papers
- GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with
Realistic Access to GNN Models [3.0509197593879844]
This paper investigates edge privacy in contexts where adversaries possess black-box GNN model access.
We introduce a series of privacy attacks grounded in the message-passing mechanism of GNNs.
arXiv Detail & Related papers (2023-11-03T20:26:03Z)
- A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and
Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
To address the accompanying privacy risks, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z)
- Uncertainty Quantification over Graph with Conformalized Graph Neural
Networks [52.20904874696597]
Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data.
GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant.
We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates; a minimal CP sketch appears after this list.
arXiv Detail & Related papers (2023-05-23T21:38:23Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Label-Only Membership Inference Attack against Node-Level Graph Neural
Networks [30.137860266059004]
Graph Neural Networks (GNNs) are vulnerable to Membership Inference Attacks (MIAs).
We propose a label-only MIA against GNNs for node classification with the help of GNNs' flexible prediction mechanism; a generic label-only baseline is sketched after this list.
Our attacking method achieves around 60% accuracy, precision, and Area Under the Curve (AUC) for most datasets and GNN models.
arXiv Detail & Related papers (2022-07-27T19:46:26Z)
- NetFense: Adversarial Defenses against Privacy Attacks on Neural
Networks for Graph Data [10.609715843964263]
We propose a novel research task, adversarial defenses against GNN-based privacy attacks.
We present a graph perturbation-based approach, NetFense, to achieve the goal.
arXiv Detail & Related papers (2021-06-22T15:32:50Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning
Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph-structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
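As referenced in the CF-GNN entry above, the following is a minimal sketch of generic split conformal prediction for node classification, the technique that CF-GNN extends to graphs. The score function, variable names, and the exchangeability assumption between calibration and test nodes are stated assumptions, not CF-GNN's actual procedure.

```python
# Generic split conformal prediction for classification, applied to the
# softmax outputs of any trained GNN. Illustrative, not CF-GNN itself.
import torch

def conformal_prediction_sets(probs, calib_idx, calib_labels, test_idx,
                              alpha=0.1):
    """probs: [num_nodes, num_classes] softmax outputs.
    Returns a boolean [num_test, num_classes] mask whose prediction sets
    contain the true label with probability >= 1 - alpha, assuming
    calibration and test nodes are exchangeable."""
    # Nonconformity score: 1 - probability assigned to the true class.
    calib_scores = 1.0 - probs[calib_idx, calib_labels]

    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    n = calib_scores.numel()
    q_level = min(1.0, (n + 1) * (1 - alpha) / n)
    qhat = torch.quantile(calib_scores, q_level)

    # Include every class whose score falls below the threshold.
    return (1.0 - probs[test_idx]) <= qhat
```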
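As referenced in the label-only MIA entry above, that paper's attack mechanism is not detailed here. As a generic illustration of the label-only threat model, this sketch shows the classic "gap attack" baseline, which guesses that a node is a training member exactly when the model classifies it correctly; all names are illustrative.

```python
# Generic label-only membership-inference baseline ("gap attack"),
# not the attack proposed in the paper above.
import torch

def gap_attack(pred_labels, true_labels):
    """pred_labels, true_labels: [num_nodes] class indices.
    Guess 'member' wherever the model's prediction is correct."""
    return pred_labels == true_labels

def attack_accuracy(member_guess, is_member):
    """Fraction of nodes whose membership status is guessed correctly."""
    return (member_guess == is_member).float().mean().item()
```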