Privacy in Deep Learning: A Survey
- URL: http://arxiv.org/abs/2004.12254v5
- Date: Sat, 7 Nov 2020 01:52:13 GMT
- Title: Privacy in Deep Learning: A Survey
- Authors: Fatemehsadat Mireshghallah, Mohammadkazem Taram, Praneeth Vepakomma,
Abhishek Singh, Ramesh Raskar, Hadi Esmaeilzadeh
- Abstract summary: The ever-growing advances of deep learning in many areas have led to the adoption of Deep Neural Networks (DNNs) in production systems.
The availability of large datasets and high computational power are the main contributors to these advances.
This poses serious privacy concerns as this data can be misused or leaked through various vulnerabilities.
- Score: 16.278779275923448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ever-growing advances of deep learning in many areas including vision,
recommendation systems, natural language processing, etc., have led to the
adoption of Deep Neural Networks (DNNs) in production systems. The availability
of large datasets and high computational power are the main contributors to
these advances. The datasets are usually crowdsourced and may contain sensitive
information. This poses serious privacy concerns as this data can be misused or
leaked through various vulnerabilities. Even if the cloud provider and the
communication link are trusted, there are still threats of inference attacks
where an attacker could infer properties of the data used for training, or
find the underlying model architecture and parameters. In this survey, we
review the privacy concerns brought by deep learning, and the mitigating
techniques introduced to tackle these issues. We also show that there is a gap
in the literature regarding test-time inference privacy, and propose possible
future research directions.
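To make the inference-attack threat concrete, here is a minimal sketch of a loss-threshold membership inference attack, one common way an attacker can guess whether a specific example was part of a model's training set. It is a generic illustration rather than a method taken from this survey; the predicted probabilities, labels, and threshold below are hypothetical.
```python
# Minimal sketch of a loss-threshold membership inference attack (illustrative
# only; the model outputs, labels, and threshold are hypothetical).
import numpy as np

def cross_entropy(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Per-example cross-entropy loss from predicted class probabilities."""
    eps = 1e-12
    return -np.log(probs[np.arange(len(labels)), labels] + eps)

def membership_guess(probs: np.ndarray, labels: np.ndarray, threshold: float) -> np.ndarray:
    """Guess 'member' (True) when the model's loss on an example is unusually low.
    Models tend to fit their training points more tightly than unseen points,
    so low loss is (weak) evidence of membership; the threshold is usually
    calibrated on data the attacker knows was not used for training."""
    return cross_entropy(probs, labels) < threshold

# Toy usage: 3-class predictions for 4 examples queried through a black-box model.
probs = np.array([[0.97, 0.02, 0.01],   # very confident -> likely a training member
                  [0.40, 0.35, 0.25],
                  [0.90, 0.05, 0.05],
                  [0.34, 0.33, 0.33]])
labels = np.array([0, 1, 0, 2])
print(membership_guess(probs, labels, threshold=0.5))  # e.g. [ True False  True False]
```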
Related papers
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged; it aims to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods, covering both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z)
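As a rough sketch of the model-inversion idea, the example below runs gradient ascent on the input of a linear softmax classifier with known parameters, synthesizing an input the model strongly associates with a target class. It is a hypothetical toy illustration, not one of the surveyed attacks; real MIAs target deep networks and typically add priors or generative models to produce realistic reconstructions.
```python
# Minimal sketch of gradient-based model inversion against a linear softmax
# classifier with known parameters (toy illustration; W, b, and all
# hyperparameters are hypothetical).
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def invert_class(W: np.ndarray, b: np.ndarray, target: int,
                 steps: int = 200, lr: float = 0.5) -> np.ndarray:
    """Gradient ascent on log p(target | x) to synthesize a representative input."""
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        p = softmax(W @ x + b)
        # d/dx log p_target = W[target] - sum_k p_k * W[k]
        x += lr * (W[target] - p @ W)
    return x

# Toy usage: invert class 1 of a random 3-class model over 5 input features.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 5)), np.zeros(3)
x_rec = invert_class(W, b, target=1)
print(softmax(W @ x_rec + b))  # probability mass should concentrate on class 1
```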
- Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions [12.451936012379319]
Large Language Models (LLMs) represent a significant advancement in artificial intelligence, finding applications across various domains.
Their reliance on massive internet-sourced datasets for training brings notable privacy issues.
Certain application-specific scenarios may require fine-tuning these models on private data.
arXiv Detail & Related papers (2024-08-10T05:41:19Z)
- Federated Learning Privacy: Attacks, Defenses, Applications, and Policy Landscape - A Survey [27.859861825159342]
Deep learning has shown incredible potential across a vast array of tasks.
Recent concerns about privacy have further highlighted the challenges of accessing such data.
Federated learning has emerged as an important privacy-preserving technology.
arXiv Detail & Related papers (2024-05-06T16:55:20Z)
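For context on the federated setting, here is a minimal sketch of federated averaging (FedAvg): each client trains on its own data locally, and only model parameters are sent to the server, so raw data never leaves the device. The linear least-squares model, the client data, and the hyperparameters are hypothetical; this illustrates the general scheme, not the cited survey's contribution.
```python
# Minimal sketch of federated averaging (FedAvg) on a linear least-squares
# model (illustrative only; clients, data, and hyperparameters are hypothetical).
import numpy as np

def local_sgd(w: np.ndarray, X: np.ndarray, y: np.ndarray,
              epochs: int = 5, lr: float = 0.1) -> np.ndarray:
    """One client's local training on its private data (mean squared error loss)."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global: np.ndarray, clients: list) -> np.ndarray:
    """Server aggregates locally trained models, weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in clients])
    local = np.stack([local_sgd(w_global, X, y) for X, y in clients])
    return (sizes[:, None] * local).sum(axis=0) / sizes.sum()

# Toy usage: two clients whose data stays local; only weight vectors are exchanged.
rng = np.random.default_rng(1)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]
w = np.zeros(3)
for _ in range(10):          # communication rounds
    w = fedavg_round(w, clients)
print(w)
```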
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy-sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
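As background for DP data publishing, the sketch below shows the Gaussian mechanism, a basic building block for releasing a sanitized statistic: each value is clipped to bound one person's influence, and noise calibrated to that sensitivity is added before release. The clipping bound, epsilon, and delta are hypothetical choices, and this textbook mechanism stands in for, rather than reproduces, the cited paper's approach.
```python
# Minimal sketch of the Gaussian mechanism for (epsilon, delta)-differentially
# private release of a mean (textbook illustration; clip, epsilon, and delta
# are hypothetical choices, with epsilon < 1 assumed for the classic analysis).
import numpy as np

def dp_mean(values: np.ndarray, clip: float, epsilon: float, delta: float,
            rng: np.random.Generator) -> float:
    """Release the mean of a sensitive vector with (epsilon, delta)-DP.
    Clipping each value to [-clip, clip] bounds the change one person can
    cause in the mean to 2*clip/n, and the Gaussian noise is scaled to that
    sensitivity."""
    n = len(values)
    clipped = np.clip(values, -clip, clip)
    sensitivity = 2 * clip / n
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return clipped.mean() + rng.normal(scale=sigma)

# Toy usage: publish an approximate mean of sensitive per-user values.
rng = np.random.default_rng(2)
values = rng.normal(loc=50.0, scale=10.0, size=1000)
print(dp_mean(values, clip=100.0, epsilon=0.5, delta=1e-5, rng=rng))
```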
- A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
To address the privacy concerns that arise when GNNs are trained on sensitive graph data, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z)
- Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z)
- Survey: Leakage and Privacy at Inference Time [59.957056214792665]
Leakage of data from publicly available Machine Learning (ML) models is an area of growing significance.
We focus on inference-time leakage, as the most likely scenario for publicly available models.
We propose a taxonomy spanning involuntary and malevolent leakage, survey the available defences, and then review currently available assessment metrics and applications.
arXiv Detail & Related papers (2021-07-04T12:59:16Z)
- Security and Privacy Preserving Deep Learning [2.322461721824713]
The massive data collection required for deep learning presents obvious privacy issues.
Users' personal, highly sensitive data, such as photos and voice recordings, are kept indefinitely by the companies that collect them.
Deep neural networks are susceptible to various inference attacks as they remember information about their training data.
arXiv Detail & Related papers (2020-06-23T01:53:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.