A Unified View of Differentially Private Deep Generative Modeling
- URL: http://arxiv.org/abs/2309.15696v1
- Date: Wed, 27 Sep 2023 14:38:16 GMT
- Title: A Unified View of Differentially Private Deep Generative Modeling
- Authors: Dingfan Chen, Raouf Kerkouche, Mario Fritz
- Abstract summary: Data with privacy concerns comes with stringent regulations that frequently prohibited data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
- Score: 60.72161965018005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The availability of rich and vast data sources has greatly advanced machine
learning applications in various domains. However, data with privacy concerns
comes with stringent regulations that frequently prohibit data access and
data sharing. Overcoming these obstacles in compliance with privacy
considerations is key for technological progress in many real-world application
scenarios that involve privacy-sensitive data. Differentially private (DP) data
publishing provides a compelling solution, where only a sanitized form of the
data is publicly released, enabling privacy-preserving downstream analysis and
reproducible research in sensitive domains. In recent years, various approaches
have been proposed for achieving privacy-preserving high-dimensional data
generation by private training on top of deep neural networks. In this paper,
we present a novel unified view that systematizes these approaches. Our view
provides a joint design space for systematically deriving methods that cater to
different use cases. We then discuss the strengths, limitations, and inherent
correlations between different approaches, aiming to shed light on crucial
aspects and inspire future research. We conclude by presenting potential paths
forward for the field of DP data generation, with the aim of steering the
community toward taking the next important steps in advancing
privacy-preserving learning.
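To make "private training" concrete: many of the approaches this paper unifies build on DP-SGD-style gradient sanitization, where each example's gradient is clipped to a fixed L2 bound and Gaussian noise calibrated to that bound is added before the parameter update. Below is a minimal illustrative sketch of one such step on a toy linear model; the function name, hyperparameters, and model are assumptions for illustration, not code from the paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD-style update: clip each per-example gradient to
    `clip_norm` in L2 norm, sum, add Gaussian noise calibrated to the
    clipping bound, then take an averaged gradient step."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # The L2 sensitivity of the clipped-gradient sum is `clip_norm`, so
    # noise of scale `noise_multiplier * clip_norm` gives a per-step
    # Gaussian-mechanism guarantee (privacy accounting is omitted here).
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=params.shape)
    return params - lr * noisy_sum / len(per_example_grads)

# Toy usage: privately fit a 3-parameter linear model with squared loss.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
params = np.zeros(3)
for _ in range(200):
    x = rng.normal(size=(8, 3))                 # batch of 8 "private" rows
    y = x @ true_w
    grads = [2.0 * (xi @ params - yi) * xi for xi, yi in zip(x, y)]
    params = dp_sgd_step(params, grads, rng=rng)
```

Production implementations (e.g., Opacus or TensorFlow Privacy) add per-example gradient computation at scale and a privacy accountant that tracks the cumulative (epsilon, delta) budget across steps.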
Related papers
- Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions [11.338466798715906]
Fine-tuning Large Language Models (LLMs) can achieve state-of-the-art performance across various domains.
This paper provides a comprehensive survey of privacy challenges associated with fine-tuning LLMs.
We highlight vulnerabilities to various privacy attacks, including membership inference, data extraction, and backdoor attacks.
arXiv Detail & Related papers (2024-12-21T06:41:29Z)
- Privacy-Preserving Large Language Models: Mechanisms, Applications, and Future Directions [0.0]
This survey explores the landscape of privacy-preserving mechanisms tailored for large language models.
We examine their efficacy in addressing key privacy challenges, such as membership inference and model inversion attacks.
By synthesizing state-of-the-art approaches and future trends, this paper provides a foundation for developing robust, privacy-preserving large language models.
arXiv Detail & Related papers (2024-12-09T00:24:09Z)
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged; it aims to extract sensitive features of the data used for training.
Despite their significance, there is a lack of systematic studies providing a comprehensive overview and deeper insights into MIAs.
This survey summarizes up-to-date MIA methods, covering both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z)
- Tabular Data Synthesis with Differential Privacy: A Survey [24.500349285858597]
Data sharing is a prerequisite for collaborative innovation, enabling organizations to leverage diverse datasets for deeper insights.
Data synthesis tackles this by generating artificial datasets that preserve the statistical characteristics of real data.
Differentially private data synthesis has emerged as a promising approach to privacy-aware data sharing.
arXiv Detail & Related papers (2024-11-04T06:32:48Z)
- Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions [12.451936012379319]
Large Language Models (LLMs) represent a significant advancement in artificial intelligence, finding applications across various domains.
Their reliance on massive internet-sourced datasets for training brings notable privacy issues.
Certain application-specific scenarios may require fine-tuning these models on private data.
arXiv Detail & Related papers (2024-08-10T05:41:19Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- Recent Advances of Differential Privacy in Centralized Deep Learning: A Systematic Survey [1.89915151018241]
Differential Privacy has become a widely popular method for data protection in machine learning.
This survey provides an overview of the state-of-the-art of differentially private centralized deep learning.
arXiv Detail & Related papers (2023-09-28T12:44:59Z)
- Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- Private Set Generation with Discriminative Information [63.851085173614]
Differentially private data generation is a promising solution to the data privacy challenge.
Existing private generative models struggle with the utility of their synthetic samples.
We introduce a simple yet effective method that greatly improves the sample utility of state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-07T10:02:55Z)
- GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators [74.16405337436213]
We propose Gradient-Sanitized Wasserstein Generative Adversarial Networks (GS-WGAN).
GS-WGAN allows releasing a sanitized form of sensitive data with rigorous privacy guarantees.
We find our approach consistently outperforms state-of-the-art approaches across multiple metrics (see the sketch after this entry).
arXiv Detail & Related papers (2020-06-15T10:01:01Z)
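As a rough sketch of the gradient-sanitization idea described in this entry (my own illustrative assumptions, not the authors' implementation): only the gradient that the discriminator backpropagates to the generator is clipped and noised, so the generator learns about the private data exclusively through a sanitized channel.

```python
import numpy as np

def sanitize_gradient(grad, clip_bound=1.0, noise_multiplier=1.1, rng=None):
    """Clip a backpropagated gradient to `clip_bound` in L2 norm and add
    Gaussian noise calibrated to that bound (the Gaussian mechanism)."""
    rng = rng or np.random.default_rng()
    clipped = grad * min(1.0, clip_bound / (np.linalg.norm(grad) + 1e-12))
    return clipped + rng.normal(scale=noise_multiplier * clip_bound,
                                size=grad.shape)

# Hypothetical use in a GAN loop: `raw_grad` stands in for the gradient of
# the discriminator's score w.r.t. a generated sample; the generator is
# updated only with the sanitized version.
rng = np.random.default_rng(0)
raw_grad = rng.normal(size=64)
safe_grad = sanitize_gradient(raw_grad, rng=rng)
```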