GraphPub: Generation of Differential Privacy Graph with High
Availability
- URL: http://arxiv.org/abs/2403.00030v2
- Date: Tue, 5 Mar 2024 05:34:55 GMT
- Title: GraphPub: Generation of Differential Privacy Graph with High
Availability
- Authors: Wanghan Xu, Bin Shi, Ao Liu, Jiqiang Zhang, Bo Dong
- Abstract summary: Differential privacy (DP) is a common method to protect privacy on graph data.
Due to the complex topological structure of graph data, applying DP on graphs often affects the message passing and aggregation of GNN models.
We propose graph publisher (GraphPub), which protects graph topology while keeping data availability essentially unchanged.
- Score: 21.829551460549936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, with the rapid development of graph neural networks (GNN),
more and more graph datasets have been published for GNN tasks. However, when
an upstream data owner publishes graph data, there are often many privacy
concerns, because many real-world graphs contain sensitive information such as
a person's friend list. Differential privacy (DP) is a common method to protect
privacy, but due to the complex topological structure of graph data, applying
DP on graphs often affects the message passing and aggregation of GNN models,
leading to a decrease in model accuracy. In this paper, we propose a novel
graph edge protection framework, graph publisher (GraphPub), which can protect
graph topology while ensuring that the availability of data is basically
unchanged. Through reverse learning and the encoder-decoder mechanism, we
search for some false edges that do not have a large negative impact on the
aggregation of node features, and use them to replace some real edges. The
modified graph is then published, in which real and false edges are difficult
to distinguish. Extensive experiments show that our framework achieves model
accuracy close to that of the original graph under an extremely low privacy
budget.
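To make the mechanism concrete, the following is a minimal, hypothetical sketch of this kind of edge replacement, not the authors' actual algorithm: candidate false edges are scored by how little they perturb mean-aggregated neighbor features (one message-passing step), and the lowest-impact candidates replace an equal number of randomly chosen real edges, so the published graph keeps its edge count while hiding true topology. The function names, the brute-force scoring, and the mean aggregator are all illustrative assumptions.

```python
import numpy as np

def aggregate(adj: np.ndarray, feats: np.ndarray) -> np.ndarray:
    """One mean-aggregation (message-passing) step over the graph."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return adj @ feats / deg

def publish_graph(adj: np.ndarray, feats: np.ndarray,
                  num_swaps: int, seed: int = 0) -> np.ndarray:
    """Hypothetical GraphPub-style swap: drop some real edges and add
    false edges whose effect on aggregated node features is smallest."""
    rng = np.random.default_rng(seed)
    base = aggregate(adj, feats)
    pub = adj.copy()
    n = adj.shape[0]

    # Candidate false edges: currently unconnected node pairs.
    candidates = [(i, j) for i in range(n) for j in range(i + 1, n)
                  if adj[i, j] == 0]

    def perturbation(edge):
        # How much does adding this single false edge change aggregation?
        i, j = edge
        trial = pub.copy()
        trial[i, j] = trial[j, i] = 1
        return np.linalg.norm(aggregate(trial, feats) - base)

    candidates.sort(key=perturbation)

    # Remove num_swaps randomly chosen real edges...
    real = np.argwhere(np.triu(adj, k=1) == 1)
    for k in rng.choice(len(real), size=num_swaps, replace=False):
        i, j = real[k]
        pub[i, j] = pub[j, i] = 0
    # ...and add the num_swaps lowest-impact false edges in their place.
    for i, j in candidates[:num_swaps]:
        pub[i, j] = pub[j, i] = 1
    return pub
```

The O(n^2) candidate scan here is only for readability; per the abstract, the paper finds such edges via reverse learning and an encoder-decoder mechanism, and the number of replaced edges is governed by the privacy budget.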
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941] (arXiv, 2024-01-12)
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
- Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation [68.59161853439339] (arXiv, 2023-03-05)
We propose a novel method for generating unlearnable graph examples.
By injecting delusive but imperceptible noise into graphs using our Error-Minimizing Structural Poisoning (EMinS) module, we are able to make the graphs unexploitable.
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038] (arXiv, 2022-09-16)
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
- Graph Generative Model for Benchmarking Graph Neural Networks [73.11514658000547] (arXiv, 2022-07-10)
We introduce a novel graph generative model that learns and reproduces the distribution of real-world graphs in a privacy-controlled way.
Our model can successfully generate privacy-controlled, synthetic substitutes of large-scale real-world graphs that can be effectively used to benchmark GNN models.
- Inference Attacks Against Graph Neural Networks [33.19531086886817] (arXiv, 2021-10-06)
Graph embedding is a powerful tool for solving graph analytics problems.
While sharing graph embedding is intriguing, the associated privacy risks are unexplored.
We systematically investigate the information leakage of the graph embedding by mounting three inference attacks.
- Deep Fraud Detection on Non-attributed Graph [61.636677596161235] (arXiv, 2021-10-04)
Graph Neural Networks (GNNs) have shown solid performance on fraud detection.
However, labeled data is scarce in large-scale industrial problems, especially for fraud detection.
We propose a novel graph pre-training strategy to leverage more unlabeled data.
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796] (arXiv, 2021-06-05)
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference (a hedged sketch of the projected-gradient idea appears after this list).
- Adversarial Privacy Preserving Graph Embedding against Inference Attack [9.90348608491218] (arXiv, 2020-08-30)
Graph embedding has proven extremely useful for learning low-dimensional feature representations from graph-structured data.
However, existing graph embedding methods do not take users' privacy into account, leaving them vulnerable to inference attacks.
We propose Adversarial Privacy Graph Embedding (APGE), a graph adversarial training framework that integrates disentangling and purging mechanisms to remove users' private information from learned node representations.
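As promised in the GraphMI entry above, here is a hedged sketch of the generic projected-gradient idea for edge inference. It illustrates the relax-optimize-project pattern only; the model interface model(adj, feats), the loss, and all hyperparameters are assumptions rather than the paper's exact method.

```python
import torch

def invert_edges(model, feats, labels, steps=200, lr=0.1, sparsity=1e-3):
    """Hypothetical GraphMI-style edge inference via projected gradients."""
    n = feats.shape[0]
    # Relax the discrete adjacency matrix to continuous entries in [0, 1].
    a = torch.full((n, n), 0.5, requires_grad=True)
    opt = torch.optim.Adam([a], lr=lr)
    for _ in range(steps):
        adj = (a + a.T) / 2  # keep the surrogate adjacency symmetric
        logits = model(adj, feats)  # assumed GNN interface: logits per node
        loss = torch.nn.functional.cross_entropy(logits, labels)
        loss = loss + sparsity * adj.abs().sum()  # L1 term keeps graph sparse
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            a.clamp_(0.0, 1.0)  # project back onto the feasible set
    # Threshold the continuous surrogate back to a discrete edge set.
    return ((a + a.T) / 2 > 0.5).detach()
```

Relax-then-project is the standard way to run gradient descent over discrete edges; GraphMI additionally pairs it with a graph auto-encoder module to exploit topology, attributes, and model parameters, which this sketch omits.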
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.