DPlis: Boosting Utility of Differentially Private Deep Learning via
Randomized Smoothing
- URL: http://arxiv.org/abs/2103.01496v1
- Date: Tue, 2 Mar 2021 06:33:14 GMT
- Title: DPlis: Boosting Utility of Differentially Private Deep Learning via
Randomized Smoothing
- Authors: Wenxiao Wang (1), Tianhao Wang (2), Lun Wang (3), Nanqing Luo (4), Pan
Zhou (4), Dawn Song (3), Ruoxi Jia (5) ((1) Tsinghua University, (2) Harvard
University, (3) University of California, Berkeley, (4) Huazhong University
of Science and Technology, (5) Virginia Tech)
- Abstract summary: We propose DPlis--Differentially Private Learning wIth Smoothing.
We show that DPlis can effectively boost model quality and training stability under a given privacy budget.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning techniques have achieved remarkable performance in wide-ranging
tasks. However, when trained on privacy-sensitive datasets, the model
parameters may expose private information in the training data. Prior attempts
at differentially private training, although offering rigorous privacy
guarantees, yield much lower model performance than non-private training.
Moreover, different runs of the same training algorithm produce models with large
performance variance. To address these issues, we propose DPlis--Differentially
Private Learning wIth Smoothing. The core idea of DPlis is to construct a
smooth loss function that favors noise-resilient models lying in large flat
regions of the loss landscape. We provide theoretical justification for the
utility improvements of DPlis. Extensive experiments also demonstrate that
DPlis can effectively boost model quality and training stability under a given
privacy budget.
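The core idea can be illustrated with a toy sketch (the loss function and constants below are hypothetical, not the paper's implementation): the smoothed objective averages the original loss over Gaussian perturbations of the parameters, L_smooth(w) = E_{eps ~ N(0, sigma^2)}[L(w + eps)], which can be estimated by Monte Carlo:

```python
import numpy as np

def toy_loss(w):
    # Hypothetical sharp, wiggly 1-D loss for illustration.
    return w ** 2 + 0.3 * np.sin(25 * w)

def smoothed_loss(w, sigma=0.2, k=5000, rng=None):
    """Monte Carlo estimate of E_{eps ~ N(0, sigma^2)}[loss(w + eps)]."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.normal(0.0, sigma, size=k)
    return float(np.mean(toy_loss(w + eps)))
```

Smoothing averages away the high-frequency wiggles while keeping the overall bowl shape, so optimization favors the wide flat regions that the paper argues are more noise-resilient.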
Related papers
- Private Fine-tuning of Large Language Models with Zeroth-order Optimization [54.24600476755372]
We introduce DP-ZO, a new method for fine-tuning large language models that preserves the privacy of training data by privatizing zeroth-order optimization.
We show that DP-ZO exhibits just $1.86\%$ performance degradation due to privacy at $(1, 10^{-5})$-DP when fine-tuning OPT-66B on 1000 training samples from SQuAD.
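As a rough illustration of privatized zeroth-order optimization (a hedged sketch under assumed details: a two-point SPSA-style estimate whose scalar is clipped and noised; DP-ZO's actual mechanism and parameters may differ):

```python
import numpy as np

def dp_zo_step(w, loss_fn, lr=0.1, beta=1e-3, clip=1.0, sigma=0.5, rng=None):
    """Hypothetical DP zeroth-order step (names/constants illustrative):
    estimate the directional derivative from two loss queries, then clip
    and noise that single scalar -- only the scalar touches private data."""
    if rng is None:
        rng = np.random.default_rng(0)
    u = rng.normal(size=w.shape)                 # random search direction
    d = (loss_fn(w + beta * u) - loss_fn(w - beta * u)) / (2 * beta)
    d = float(np.clip(d, -clip, clip))           # bound the sensitivity
    d += rng.normal(0.0, sigma * clip)           # Gaussian mechanism
    return w - lr * d * u
```

Because only a scalar per step is privatized, the noise does not scale with the number of model parameters, which is the appeal for very large models.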
arXiv Detail & Related papers (2024-01-09T03:53:59Z)
- Sparsity-Preserving Differentially Private Training of Large Embedding Models [67.29926605156788]
DP-SGD is a training algorithm that combines differential privacy with stochastic gradient descent.
Applying DP-SGD naively to embedding models can destroy gradient sparsity, leading to reduced training efficiency.
We present two new algorithms, DP-FEST and DP-AdaFEST, that preserve gradient sparsity during private training of large embedding models.
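DP-FEST and DP-AdaFEST themselves are not reproduced here; the following sketch only demonstrates the problem they address (dimensions and constants are illustrative): adding dense Gaussian noise, as in naive DP-SGD, turns a sparse embedding-table gradient dense.

```python
import numpy as np

rng = np.random.default_rng(0)

# Embedding-table gradient for one batch: only the rows of tokens that
# actually appeared are non-zero (dimensions here are illustrative).
grad = np.zeros((1000, 16))
grad[[3, 17, 42]] = rng.normal(size=(3, 16))

def naive_dp_noise(g, clip=1.0, sigma=0.5):
    g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))    # clip
    return g + rng.normal(0.0, sigma * clip, size=g.shape)  # dense noise

rows_before = np.count_nonzero(np.abs(grad).sum(axis=1))
rows_after = np.count_nonzero(np.abs(naive_dp_noise(grad)).sum(axis=1))
# Dense Gaussian noise touches every row, so the sparse structure is gone.
```

With 3 of 1000 rows non-zero before noising and all 1000 after, the sparse update becomes a dense one, which is the training-efficiency loss the summary refers to.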
arXiv Detail & Related papers (2023-11-14T17:59:51Z)
- Differentially Private Sharpness-Aware Training [5.488902352630076]
Training deep learning models with differential privacy (DP) results in a degradation of performance.
We show that flat minima can help reduce the negative effects of per-example gradient clipping.
We propose a new sharpness-aware training method that mitigates the privacy-optimization trade-off.
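The paper's exact method is not detailed in this summary; a generic sharpness-aware (SAM-style) update, which seeks flat minima by taking the gradient at an adversarially perturbed point, can be sketched as:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """Generic SAM-style update (illustrative, not this paper's method):
    evaluate the gradient at the adversarially perturbed point
    w + rho * g / ||g||, so the descent direction accounts for sharpness."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    return w - lr * grad_fn(w + eps)             # descend from the sharp point
```

A DP variant would additionally clip and noise the per-example gradients, as in DP-SGD; the summary's point is that landing in flat minima makes those clipped, noised updates less damaging.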
arXiv Detail & Related papers (2023-06-09T03:37:27Z)
- Learning Differentially Private Probabilistic Models for Privacy-Preserving Image Generation [67.47979276739144]
We propose learning differentially private probabilistic models to generate high-resolution images with differential privacy guarantee.
Our approach can generate images up to 256x256 with remarkable visual quality and data utility.
arXiv Detail & Related papers (2023-05-18T02:51:17Z)
- Large Scale Transfer Learning for Differentially Private Image Classification [51.10365553035979]
Differential Privacy (DP) provides a formal framework for training machine learning models with individual example level privacy.
Private training using DP-SGD protects against leakage by injecting noise into individual example gradients.
While this result is quite appealing, the computational cost of training large-scale models with DP-SGD is substantially higher than non-private training.
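The DP-SGD mechanism described above is standard and can be sketched directly (toy dimensions; `sigma` and `clip` values are illustrative):

```python
import numpy as np

def dp_sgd_update(w, per_example_grads, clip=1.0, sigma=1.0, lr=0.1, rng=None):
    """Canonical DP-SGD step: clip each example's gradient to norm <= clip,
    sum, add Gaussian noise scaled to the clipping norm, then average."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    total = total + rng.normal(0.0, sigma * clip, size=total.shape)
    return w - lr * total / len(per_example_grads)
```

The per-example clipping is what drives the extra computational cost at scale: the trainer must materialize or bound each example's gradient individually rather than working only with the batch average.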
arXiv Detail & Related papers (2022-05-06T01:22:20Z)
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- Large Language Models Can Be Strong Differentially Private Learners [70.0317718115406]
Differentially Private (DP) learning has seen limited success for building large deep learning models of text.
We show that this performance drop can be mitigated with the use of large pretrained models.
We propose a memory saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients.
arXiv Detail & Related papers (2021-10-12T01:45:27Z)
- An Efficient DP-SGD Mechanism for Large Scale NLP Models [28.180412581994485]
Data used to train Natural Language Understanding (NLU) models may contain private information such as addresses or phone numbers.
It is desirable that underlying models do not expose private information contained in the training data.
Differentially Private Gradient Descent (DP-SGD) has been proposed as a mechanism to build privacy-preserving models.
arXiv Detail & Related papers (2021-07-14T15:23:27Z)
- Improving Deep Learning with Differential Privacy using Gradient Encoding and Denoising [36.935465903971014]
In this paper, we aim at training deep learning models with differential privacy guarantees.
Our key technique is to encode gradients to map them to a smaller vector space.
We show that our mechanism outperforms the state-of-the-art DP-SGD.
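The encoding step can be pictured with a random-projection sketch (the projection matrix, decode rule, and constants are assumptions for illustration, not the paper's actual encoding): clip and noise the gradient in the smaller space, then map back.

```python
import numpy as np

d, k = 512, 32                      # original and encoded dims (illustrative)
rng = np.random.default_rng(0)
P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))  # random projection

def encode_noise_decode(grad, clip=1.0, sigma=0.5):
    """Hypothetical sketch: clip and noise the gradient in the small
    k-dimensional encoded space, then map back with the transpose."""
    z = P @ grad
    z = z * min(1.0, clip / (np.linalg.norm(z) + 1e-12))
    z = z + rng.normal(0.0, sigma * clip, size=k)
    return P.T @ z                  # approximate decode
```

Noising k coordinates instead of d means far less total noise for the same sensitivity bound, which is the intuition behind encoding gradients into a smaller vector space.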
arXiv Detail & Related papers (2020-07-22T16:33:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.