Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation
- URL: http://arxiv.org/abs/2303.02568v1
- Date: Sun, 5 Mar 2023 03:30:22 GMT
- Title: Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation
- Authors: Yixin Liu, Chenrui Fan, Pan Zhou and Lichao Sun
- Abstract summary: We propose a novel method for generating unlearnable graph examples.
By injecting delusive but imperceptible noise into graphs using our Error-Minimizing Structural Poisoning (EMinS) module, we are able to make the graphs unexploitable.
- Score: 68.59161853439339
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While the use of graph-structured data in various fields is becoming
increasingly popular, it also raises concerns about the potential unauthorized
exploitation of personal data for training commercial graph neural network
(GNN) models, which can compromise privacy. To address this issue, we propose a
novel method for generating unlearnable graph examples. By injecting delusive
but imperceptible noise into graphs using our Error-Minimizing Structural
Poisoning (EMinS) module, we are able to make the graphs unexploitable.
Notably, by modifying at most $5\%$ of the potential edges in the graph
data, our method decreases the accuracy from $77.33\%$ to $42.47\%$ on the
COLLAB dataset.
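The abstract sketches the core recipe: search for small, imperceptible edge flips that minimize the training error, so that a GNN trained on the poisoned graphs latches onto shortcuts instead of real features. Below is a minimal greedy sketch of that error-minimizing idea, assuming a toy fixed surrogate in place of a GNN; it is an illustration only, not the paper's EMinS module, and surrogate_loss, eminize, and the greedy scan are all hypothetical.

```python
# Hypothetical sketch of error-minimizing structural poisoning; NOT the
# paper's EMinS module. A fixed linear surrogate stands in for the GNN.
import numpy as np

def surrogate_loss(adj: np.ndarray, w: np.ndarray, label: int) -> float:
    """Logistic loss of a linear model on the graph's degree histogram
    (a toy stand-in for the victim GNN's training loss)."""
    deg = np.clip(adj.sum(axis=1).astype(int), 0, len(w) - 1)
    feat = np.bincount(deg, minlength=len(w)).astype(float)
    feat /= max(feat.sum(), 1.0)
    y = 1.0 if label == 1 else -1.0
    return float(np.log1p(np.exp(-y * (feat @ w))))

def eminize(adj: np.ndarray, w: np.ndarray, label: int,
            budget_frac: float = 0.05) -> np.ndarray:
    """Greedily flip the edges whose flips most *decrease* the surrogate
    loss (error-minimizing noise), touching at most budget_frac of the
    n*(n-1)/2 potential edges. adj must be a 0/1 integer matrix."""
    n = adj.shape[0]
    budget = int(budget_frac * n * (n - 1) / 2)
    poisoned = adj.copy()
    for _ in range(budget):
        base = surrogate_loss(poisoned, w, label)
        best, best_drop = None, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                poisoned[i, j] ^= 1; poisoned[j, i] ^= 1  # try one flip
                drop = base - surrogate_loss(poisoned, w, label)
                poisoned[i, j] ^= 1; poisoned[j, i] ^= 1  # undo it
                if drop > best_drop:
                    best, best_drop = (i, j), drop
        if best is None:
            break  # no remaining flip lowers the loss
        i, j = best
        poisoned[i, j] ^= 1; poisoned[j, i] ^= 1  # commit the best flip
    return poisoned
```

With budget_frac=0.05 the flip budget matches the at-most-5% figure quoted above; a real implementation would use a trainable surrogate GNN and a far more scalable optimizer than this brute-force greedy scan.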
Related papers
- Graph Unlearning with Efficient Partial Retraining [28.433619085748447]
Graph Neural Networks (GNNs) have achieved remarkable success in various real-world applications.
However, GNNs may be trained on undesirable graph data, which can degrade their performance and reliability.
We propose GraphRevoker, a novel graph unlearning framework that better maintains the model utility of unlearned GNNs.
arXiv Detail & Related papers (2024-03-12T06:22:10Z)
- GraphPub: Generation of Differential Privacy Graph with High Availability [21.829551460549936]
Differential privacy (DP) is a common method to protect privacy on graph data.
Due to the complex topological structure of graph data, applying DP on graphs often affects the message passing and aggregation of GNN models.
We propose the graph publisher (GraphPub), which protects graph topology while keeping data availability essentially unchanged; a generic sketch of edge-level DP on graphs appears after this list.
arXiv Detail & Related papers (2024-02-28T20:02:55Z)
- GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks [69.97213941893351]
The emergence of Graph Neural Networks (GNNs) in graph data analysis has raised critical concerns about data misuse during model training.
Existing methodologies address either data misuse detection or mitigation, and are primarily designed for local GNN models.
This paper introduces GraphGuard, a pioneering approach to tackling these challenges.
arXiv Detail & Related papers (2023-12-13T02:59:37Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that the evaluated defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Inference Attacks Against Graph Neural Networks [33.19531086886817]
Graph embedding is a powerful tool for solving graph analytics problems.
While sharing graph embedding is intriguing, the associated privacy risks are unexplored.
We systematically investigate the information leakage of the graph embedding by mounting three inference attacks.
arXiv Detail & Related papers (2021-10-06T10:08:11Z)
- Deep Fraud Detection on Non-attributed Graph [61.636677596161235]
Graph Neural Networks (GNNs) have shown solid performance on fraud detection.
However, labeled data is scarce in large-scale industrial problems, especially for fraud detection.
We propose a novel graph pre-training strategy to leverage more unlabeled data.
arXiv Detail & Related papers (2021-10-04T03:42:09Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
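The GraphMI entry above turns on two mechanisms: a projected gradient module that copes with the discreteness of edges, and a graph auto-encoder for edge inference. The following is a generic, hypothetical sketch of the projected-gradient part only, not GraphMI's actual code: target_score stands in for the attacker's differentiable access to the trained GNN, and the L1 term is one common way to encourage the sparsity the summary mentions.

```python
# A generic projected-gradient sketch of graph model inversion; NOT the
# GraphMI implementation. target_score is a hypothetical differentiable
# stand-in for the attacker's access to the trained GNN.
import torch

def project(a: torch.Tensor) -> torch.Tensor:
    """Keep the relaxed adjacency in [0, 1], symmetric, zero-diagonal."""
    a = a.clamp(0.0, 1.0)
    a = (a + a.T) / 2
    return a - torch.diag(torch.diag(a))

def invert_edges(target_score, n: int, steps: int = 200,
                 lr: float = 0.1, sparsity: float = 1e-2) -> torch.Tensor:
    """Optimize a relaxed adjacency to maximize the target model's score
    (projected gradient over the [0,1] box), then threshold to edges."""
    a = torch.full((n, n), 0.5, requires_grad=True)
    opt = torch.optim.Adam([a], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        adj = project(a)
        loss = -target_score(adj) + sparsity * adj.sum()  # sum = L1 on [0,1]
        loss.backward()
        opt.step()
        with torch.no_grad():
            a.copy_(project(a))  # projection step back onto the feasible set
    return (project(a.detach()) > 0.5).float()  # discrete edge predictions
```

For example, invert_edges(lambda adj: adj[0, 1] + adj[2, 3], n=4) recovers a small graph scoring high under a toy objective; GraphMI's auto-encoder module and its use of node attributes are omitted entirely here.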
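As the GraphPub entry above notes, naively applying DP to graph topology disrupts the message passing that GNNs rely on. For reference, here is the textbook randomized-response mechanism for edge-level DP, a generic illustration of "DP on graph data" rather than GraphPub's actual method.

```python
# Textbook randomized response for edge-level differential privacy; a
# generic illustration, NOT GraphPub's mechanism.
import numpy as np

def randomized_response_graph(adj: np.ndarray, epsilon: float,
                              rng: np.random.Generator) -> np.ndarray:
    """Flip each potential edge independently with probability
    1 / (1 + e^epsilon), which satisfies epsilon-edge-DP; smaller
    epsilon means more flips and stronger privacy."""
    n = adj.shape[0]
    flip_prob = 1.0 / (1.0 + np.exp(epsilon))
    iu = np.triu_indices(n, k=1)              # each undirected edge once
    flips = rng.random(len(iu[0])) < flip_prob
    noisy = adj.copy()
    noisy[iu] = np.where(flips, 1 - adj[iu], adj[iu])
    noisy[(iu[1], iu[0])] = noisy[iu]         # keep the matrix symmetric
    return noisy
```

Calling randomized_response_graph(adj, epsilon=1.0, rng=np.random.default_rng(0)) flips roughly 27% of potential edges; as epsilon shrinks, the flip probability approaches 1/2 and the topology is essentially destroyed, which is exactly the utility problem the GraphPub summary describes.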
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.