Learning Differentially Private Probabilistic Models for
Privacy-Preserving Image Generation
- URL: http://arxiv.org/abs/2305.10662v1
- Date: Thu, 18 May 2023 02:51:17 GMT
- Title: Learning Differentially Private Probabilistic Models for
Privacy-Preserving Image Generation
- Authors: Bochao Liu, Shiming Ge, Pengju Wang, Liansheng Zhuang and Tongliang
Liu
- Abstract summary: We propose learning differentially private probabilistic models to generate high-resolution images with a differential privacy guarantee.
Our approach can generate images up to 256x256 with remarkable visual quality and data utility.
- Score: 67.47979276739144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A number of deep models trained on high-quality and valuable images have been
deployed in practical applications, which may pose a leakage risk of data
privacy. Learning differentially private generative models can sidestep this
challenge through indirect data access. However, such differentially private
generative models learned by existing approaches can only generate images at
resolutions below 128x128, hindering the widespread use of
generated images in downstream training. In this work, we propose learning
differentially private probabilistic models (DPPM) to generate high-resolution
images with a differential privacy guarantee. In particular, we first train a
model to fit the distribution of the training data and make it satisfy
differential privacy by applying a randomized response mechanism during the
training process. Then we perform Hamiltonian dynamics sampling along the
differentially private movement direction predicted by the trained
probabilistic model to obtain privacy-preserving images. In this way, it is
possible to apply these images to different downstream tasks while protecting
private information. Notably, compared to other state-of-the-art differentially
private generative approaches, our approach can generate images up to 256x256
with remarkable visual quality and data utility. Extensive experiments show the
effectiveness of our approach.
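The two-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration under assumed simplifications, not the authors' implementation: a binary randomized response stands in for the paper's privatization mechanism, and a momentum-based drift update stands in for Hamiltonian dynamics sampling; `direction_fn` plays the role of the movement direction predicted by the trained probabilistic model.

```python
import numpy as np


def randomized_response(bits, epsilon, rng):
    """Binary randomized response: keep each bit with probability
    e^eps / (e^eps + 1), flip it otherwise. Each bit is eps-DP."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flip = rng.random(bits.shape) >= p_keep
    return np.where(flip, 1 - bits, bits)


def hamiltonian_sample(direction_fn, x0, n_steps=100, step=0.1,
                       friction=0.9, noise_scale=0.0, rng=None):
    """Follow a (privatized) movement direction with momentum, a
    Hamiltonian-dynamics-style update with friction; optional noise
    injection turns the drift into stochastic sampling."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.astype(float).copy()
    v = np.zeros_like(x)
    for _ in range(n_steps):
        v = friction * v + step * direction_fn(x) \
            + noise_scale * rng.normal(size=x.shape)
        x = x + v
    return x
```

With `noise_scale=0.0` the update is a deterministic damped descent toward the mode that `direction_fn` points at; the privacy guarantee in this sketch would come from privatizing the training of `direction_fn`, not from the sampler itself.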
Related papers
- Differentially Private Gradient Flow based on the Sliced Wasserstein
Distance for Non-Parametric Generative Modeling [61.65137699747604]
We introduce a novel differentially private generative modeling approach based on parameter-free gradient flows in the space of probability measures.
Our experiments show that compared to a generator-based model, our proposed model can generate higher-fidelity data at a low privacy budget.
arXiv Detail & Related papers (2023-12-13T15:47:30Z)
- Differentially Private Synthetic Data Generation via
Lipschitz-Regularised Variational Autoencoders [3.7463972693041274]
It is often overlooked that generative models are prone to memorising many details of individual training records.
In this paper we explore an alternative approach for privately generating data that makes direct use of properties inherent to generative models.
arXiv Detail & Related papers (2023-04-22T07:24:56Z)
- Extracting Training Data from Diffusion Models [77.11719063152027]
We show that diffusion models memorize individual images from their training data and emit them at generation time.
With a generate-and-filter pipeline, we extract over a thousand training examples from state-of-the-art models.
We train hundreds of diffusion models in various settings to analyze how different modeling and data decisions affect privacy.
arXiv Detail & Related papers (2023-01-30T18:53:09Z)
- Private Set Generation with Discriminative Information [63.851085173614]
Differentially private data generation is a promising solution to the data privacy challenge.
Existing private generative models struggle with the utility of their synthetic samples.
We introduce a simple yet effective method that greatly improves the sample utility of state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-07T10:02:55Z)
- On the utility and protection of optimization with differential privacy
and classic regularization techniques [9.413131350284083]
We study the effectiveness of the differentially private stochastic gradient descent (DP-SGD) algorithm against standard optimization practices with regularization techniques.
We discuss differential privacy's flaws and limits and empirically demonstrate the often superior privacy-preserving properties of dropout and l2-regularization.
arXiv Detail & Related papers (2022-09-07T14:10:21Z)
- Privacy Enhancement for Cloud-Based Few-Shot Learning [4.1579007112499315]
We study the privacy enhancement for the few-shot learning in an untrusted environment, e.g., the cloud.
We propose a method that learns a privacy-preserving representation through a joint loss.
The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
arXiv Detail & Related papers (2022-05-10T18:48:13Z)
- Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z)
- Don't Generate Me: Training Differentially Private Generative Models
with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- DPlis: Boosting Utility of Differentially Private Deep Learning via
Randomized Smoothing [0.0]
We propose DPlis (Differentially Private Learning wIth Smoothing).
We show that DPlis can effectively boost model quality and training stability under a given privacy budget.
arXiv Detail & Related papers (2021-03-02T06:33:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.