Privacy-Preserving Graph Convolutional Networks for Text Classification
- URL: http://arxiv.org/abs/2102.09604v1
- Date: Wed, 10 Feb 2021 15:27:38 GMT
- Title: Privacy-Preserving Graph Convolutional Networks for Text Classification
- Authors: Timour Igamberdiev and Ivan Habernal
- Abstract summary: Graph convolutional networks (GCNs) are a powerful architecture for representation learning and making predictions on documents that naturally occur as graphs.
Data containing sensitive personal information, such as documents with people's profiles or relationships as edges, are prone to privacy leaks from GCNs.
We show that privacy-preserving GCNs reach up to 90% of the performance of their non-private variants, while formally guaranteeing strong privacy measures.
- Score: 3.5503507997334958
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Graph convolutional networks (GCNs) are a powerful architecture for
representation learning and making predictions on documents that naturally
occur as graphs, e.g., citation or social networks. Data containing sensitive
personal information, such as documents with people's profiles or relationships
as edges, are prone to privacy leaks from GCNs, as an adversary might reveal
the original input from the trained model. Although differential privacy (DP)
offers a well-founded privacy-preserving framework, GCNs pose theoretical and
practical challenges due to their training specifics. We address these
challenges by adapting differentially-private gradient-based training to GCNs.
We investigate the impact of various privacy budgets, dataset sizes, and two
optimizers in an experimental setup over five NLP datasets in two languages. We
show that, under certain modeling choices, privacy-preserving GCNs perform up
to 90% of their non-private variants, while formally guaranteeing strong
privacy measures.
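The paper's core recipe is to adapt differentially-private gradient-based training (DP-SGD style: per-example gradient clipping followed by calibrated Gaussian noise) to GCNs. The sketch below is a minimal illustration of that recipe on a two-layer GCN, not the authors' implementation; the names `GCN`, `dp_sgd_step`, `clip_norm`, and `noise_multiplier` are assumed, and it glosses over the node-interdependence issues (neighbouring nodes share gradients through the graph convolution) that the paper identifies as the main theoretical and practical challenge.

```python
import torch
import torch.nn.functional as F

def normalize_adjacency(A):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)

class GCN(torch.nn.Module):
    """Two-layer GCN producing per-node class logits."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hidden_dim)
        self.lin2 = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, A_hat, X):
        H = F.relu(A_hat @ self.lin1(X))  # aggregate neighbours, then transform
        return A_hat @ self.lin2(H)

def dp_sgd_step(model, A_hat, X, y, train_idx, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD-style update: clip each training node's gradient to
    clip_norm, sum, add Gaussian noise of scale noise_multiplier * clip_norm,
    and average. Caveat: treating nodes as independent examples is an
    approximation for GCNs, since gradients couple through the adjacency."""
    params = list(model.parameters())
    per_node_grads = []
    for i in train_idx:  # per-"example" gradients, one training node at a time
        loss = F.cross_entropy(model(A_hat, X)[i:i + 1], y[i:i + 1])
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        factor = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        per_node_grads.append([g * factor for g in grads])
    optimizer.zero_grad()
    for p, *gs in zip(params, *per_node_grads):
        noisy = torch.stack(gs).sum(dim=0) + \
            torch.randn_like(p) * noise_multiplier * clip_norm
        p.grad = noisy / len(train_idx)
    optimizer.step()
```

Each such step consumes part of the privacy budget, which would be tracked across training with a privacy accountant; the paper's experiments vary exactly these knobs (privacy budget, dataset size, and optimizer).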
Related papers
- GCON: Differentially Private Graph Convolutional Network via Objective Perturbation [27.279817693305183]
Graph Convolutional Networks (GCNs) are a popular machine learning model with a wide range of applications in graph analytics.
When the underlying graph data contains sensitive information such as interpersonal relationships, a GCN trained without privacy-protection measures could be exploited to extract private data.
We propose GCON, a novel and effective solution for training GCNs with edge differential privacy.
arXiv Detail & Related papers (2024-07-06T09:59:56Z)
- Federated Learning Empowered by Generative Content [55.576885852501775]
Federated learning (FL) enables leveraging distributed private data for model training in a privacy-preserving way.
We propose a novel FL framework termed FedGC, designed to mitigate data heterogeneity issues by diversifying private data with generative content.
We conduct a systematic empirical study on FedGC, covering diverse baselines, datasets, scenarios, and modalities.
arXiv Detail & Related papers (2023-12-10T07:38:56Z)
- Privacy-Preserving Graph Embedding based on Local Differential Privacy [26.164722283887333]
We introduce a novel privacy-preserving graph embedding framework, named PrivGE, to protect node data privacy.
Specifically, we propose an LDP mechanism to obfuscate node data and utilize personalized PageRank as the proximity measure to learn node representations (a generic local-perturbation sketch is given after this list).
Experiments on several real-world graph datasets demonstrate that PrivGE achieves an optimal balance between privacy and utility.
arXiv Detail & Related papers (2023-10-17T08:06:08Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees [8.79398901328539]
Graph Neural Networks (GNNs) have become a popular tool for learning on graphs, but their widespread use raises privacy concerns.
We propose a new differentially private GNN called ProGAP that uses a progressive training scheme to improve such accuracy-privacy trade-offs.
arXiv Detail & Related papers (2023-04-18T12:08:41Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representation.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards privacy of nodes and edges.
arXiv Detail & Related papers (2022-03-02T08:58:07Z)
- LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis [15.923158902023669]
We focus on edge privacy and consider a training scenario in which Bob, who holds the node features, first sends training node features to Alice, who owns the adjacency information.
We first propose a privacy attack LinkTeller via influence analysis to infer the private edge information held by Alice.
We then empirically show that LinkTeller is able to recover a significant amount of private edges, outperforming existing baselines.
arXiv Detail & Related papers (2021-08-14T09:53:42Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
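For the PrivGE entry above, "an LDP mechanism to obfuscate node data" can be pictured with a generic local-perturbation sketch: each node adds calibrated Laplace noise to its own (bounded) feature vector before sharing it, so the server only ever sees noisy features. This is a standard local differential privacy illustration under assumed feature bounds, not PrivGE's actual mechanism; the function name and parameters are hypothetical.

```python
import numpy as np

def ldp_perturb_features(x, epsilon, lo=0.0, hi=1.0):
    """Locally perturb one node's feature vector with the Laplace mechanism.

    Assumes every feature lies in [lo, hi]; swapping the whole vector changes
    each coordinate by at most (hi - lo), so splitting the budget evenly over
    d coordinates gives a per-coordinate noise scale of d * (hi - lo) / epsilon.
    """
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    d = x.shape[0]
    scale = d * (hi - lo) / epsilon
    return x + np.random.laplace(loc=0.0, scale=scale, size=d)

# Each node perturbs its own features before they leave the device;
# representation learning then runs on the noisy vectors only.
noisy = ldp_perturb_features(np.array([0.2, 0.7, 1.0]), epsilon=4.0)
```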
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.