GCON: Differentially Private Graph Convolutional Network via Objective Perturbation
- URL: http://arxiv.org/abs/2407.05034v1
- Date: Sat, 6 Jul 2024 09:59:56 GMT
- Title: GCON: Differentially Private Graph Convolutional Network via Objective Perturbation
- Authors: Jianxin Wei, Yizheng Zhu, Xiaokui Xiao, Ergute Bao, Yin Yang, Kuntai Cai, Beng Chin Ooi
- Abstract summary: Graph Convolutional Networks (GCNs) are a popular machine learning model with a wide range of applications in graph analytics.
When the underlying graph data contains sensitive information such as interpersonal relationships, a GCN trained without privacy-protection measures could be exploited to extract private data.
We propose GCON, a novel and effective solution for training GCNs with edge differential privacy.
- Score: 27.279817693305183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Convolutional Networks (GCNs) are a popular machine learning model with a wide range of applications in graph analytics, including healthcare, transportation, and finance. Similar to other neural networks, a GCN may memorize parts of the training data through its model weights. Thus, when the underlying graph data contains sensitive information such as interpersonal relationships, a GCN trained without privacy-protection measures could be exploited to extract private data, leading to potential violations of privacy regulations such as GDPR. To defend against such attacks, a promising approach is to train the GCN with differential privacy (DP), which is a rigorous framework that provides strong privacy protection by injecting random noise into the trained model weights. However, training a large graph neural network under DP is a highly challenging task. Existing solutions either introduce random perturbations in the graph topology, which leads to severe distortions of the network's message passing, or inject randomness into each neighborhood aggregation operation, which leads to a high noise scale when the GCN performs multiple levels of aggregations. Motivated by this, we propose GCON, a novel and effective solution for training GCNs with edge differential privacy. The main idea is to (i) convert the GCN training process into a convex optimization problem, and then (ii) apply the classic idea of perturbing the objective function to satisfy DP. Extensive experiments using multiple benchmark datasets demonstrate GCON's consistent and superior performance over existing solutions in a wide variety of settings.
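To make the abstract's two-step recipe concrete, the sketch below illustrates the general pattern in Python/NumPy rather than the paper's actual algorithm: the GCN is linearized by pre-computing fixed, normalized feature propagation (in the style of SGC), so training reduces to a convex, L2-regularized logistic regression, and the objective is then perturbed with a random linear term before optimization. The function names, the SGC-style linearization, and the noise calibration are illustrative assumptions; GCON's exact convexification and edge-DP sensitivity analysis are defined in the paper.

```python
# Minimal sketch (not the authors' code) of objective perturbation on a
# linearized GCN. The noise draw below is only a placeholder; a real
# edge-DP guarantee requires the proper sensitivity analysis.
import numpy as np
from scipy.optimize import minimize

def propagate(adj, features, k=2):
    # Symmetrically normalized propagation A_hat^k X with no learnable
    # weights, so the remaining training problem is convex in w.
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    x = features
    for _ in range(k):
        x = a_norm @ x
    return x

def perturbed_objective(w, x, y, lam, b):
    # Convex logistic loss + L2 regularizer + random linear term b^T w.
    z = x @ w
    loss = np.mean(np.logaddexp(0.0, -y * z))      # stable log(1 + exp(-y z))
    return loss + 0.5 * lam * (w @ w) + (b @ w) / x.shape[0]

def train_with_objective_perturbation(adj, features, labels, epsilon=1.0,
                                      lam=1e-2, k=2, seed=0):
    # labels are +/-1 for binary node classification (illustrative setting).
    rng = np.random.default_rng(seed)
    x = propagate(adj, features, k)
    d = x.shape[1]
    # Illustrative noise in the style of classic objective perturbation:
    # random direction with Gamma-distributed norm scaled by 1/epsilon.
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    b = direction * rng.gamma(shape=d, scale=2.0 / epsilon)
    result = minimize(perturbed_objective, np.zeros(d),
                      args=(x, labels, lam, b), method="L-BFGS-B")
    return result.x  # only the final minimizer is released
```

The appeal of this recipe is that the randomness is injected once, into the optimization target itself, so no per-iteration gradient noise or per-aggregation perturbation is needed and only the final weights are released.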
Related papers
- Prompt-based Unifying Inference Attack on Graph Neural Networks [24.85661326294946]
We propose a novel Prompt-based unifying Inference Attack framework on Graph neural networks (GNNs).
ProIA retains the crucial topological information of the graph during pre-training, enhancing the background knowledge of the inference attack model.
It then utilizes a unified prompt and introduces additional disentanglement factors in downstream attacks to adapt to task-relevant knowledge.
arXiv Detail & Related papers (2024-12-20T09:56:17Z) - Privacy-preserving design of graph neural networks with applications to vertical federated learning [56.74455367682945]
We present an end-to-end graph representation learning framework called VESPER.
VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
arXiv Detail & Related papers (2023-10-31T15:34:59Z) - Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z) - Differentially Private Decoupled Graph Convolutions for Multigranular Topology Protection [38.96828804683783]
GNNs can inadvertently expose sensitive user information and interactions through their model predictions.
Applying standard DP approaches directly to GNNs is not advisable for two main reasons.
We propose a new framework termed Graph Differential Privacy (GDP), specifically tailored to graph learning.
arXiv Detail & Related papers (2023-07-12T19:29:06Z) - ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees [8.79398901328539]
Graph Neural Networks (GNNs) have become a popular tool for learning on graphs, but their widespread use raises privacy concerns.
We propose a new differentially private GNN called ProGAP that uses a progressive training scheme to improve such accuracy-privacy trade-offs.
arXiv Detail & Related papers (2023-04-18T12:08:41Z) - Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that contain information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z) - Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and analyzed with heterogeneous graph neural networks (HGNNs).
We propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP.
arXiv Detail & Related papers (2022-10-02T14:41:02Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards privacy of nodes and edges.
arXiv Detail & Related papers (2022-03-02T08:58:07Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, for some kinds of graph applications, such as graph compression and edge partition, it is very hard to reduce them to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications by a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.