A Differentially Private Framework for Deep Learning with Convexified
Loss Functions
- URL: http://arxiv.org/abs/2204.01049v1
- Date: Sun, 3 Apr 2022 11:10:05 GMT
- Title: A Differentially Private Framework for Deep Learning with Convexified
Loss Functions
- Authors: Zhigang Lu, Hassan Jameel Asghar, Mohamed Ali Kaafar, Darren Webb,
Peter Dickinson
- Abstract summary: Differential privacy (DP) has been applied in deep learning for preserving privacy of the underlying training sets.
Existing DP practice falls into three categories: objective perturbation, gradient perturbation, and output perturbation.
We propose a novel output perturbation framework by injecting DP noise into a randomly sampled neuron.
- Score: 4.059849656394191
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Differential privacy (DP) has been applied in deep learning for preserving
privacy of the underlying training sets. Existing DP practice falls into three
categories: objective perturbation, gradient perturbation, and output
perturbation. They suffer from three main problems. First, conditions on
objective functions limit objective perturbation in general deep learning
tasks. Second, gradient perturbation does not achieve a satisfactory
privacy-utility trade-off due to over-injected noise in each epoch. Third, high
utility of the output perturbation method is not guaranteed because of the
loose upper bound on the global sensitivity of the trained model parameters as
the noise scale parameter. To address these problems, we analyse a tighter
upper bound on the global sensitivity of the model parameters. Under a
black-box setting, based on this global sensitivity, to control the overall
noise injection, we propose a novel output perturbation framework by injecting
DP noise into a randomly sampled neuron (via the exponential mechanism) at the
output layer of a baseline non-private neural network trained with a
convexified loss function. We empirically compare the privacy-utility
trade-off, measured by accuracy loss to baseline non-private models and the
privacy leakage against black-box membership inference (MI) attacks, between
our framework and the open-source differentially private stochastic gradient
descent (DP-SGD) approaches on six commonly used real-world datasets. The
experimental evaluations show that, when the baseline models have observable
privacy leakage under MI attacks, our framework achieves a better
privacy-utility trade-off than existing DP-SGD implementations, given an
overall privacy budget $\epsilon \leq 1$ for a large number of queries.
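To make the mechanism above concrete, here is a minimal Python/NumPy sketch of one plausible instantiation: a neuron at the output layer is sampled via the exponential mechanism, and Laplace noise calibrated to an assumed global-sensitivity bound is added to that neuron's weights only. The utility scores, the half/half budget split between selection and noising (standard sequential composition), and the sensitivity bound delta_f are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_mechanism(scores, epsilon, sensitivity=1.0):
    # Sample an index with probability proportional to
    # exp(epsilon * score / (2 * sensitivity)); score sensitivity assumed 1.
    logits = epsilon * np.asarray(scores) / (2.0 * sensitivity)
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

def perturb_output_layer(W, scores, epsilon, delta_f):
    # Illustrative budget split: half for selecting a neuron, half for
    # noising it.  delta_f is an assumed upper bound on the global
    # sensitivity of the trained parameters; W has shape (n_out, n_in).
    j = exponential_mechanism(scores, epsilon / 2.0)
    W_priv = W.copy()
    W_priv[j] += rng.laplace(scale=delta_f / (epsilon / 2.0), size=W.shape[1])
    return W_priv

# Toy usage: a 10-class output layer with 128 inputs.
W = rng.normal(size=(10, 128))
scores = rng.uniform(size=10)                 # placeholder utility scores
W_dp = perturb_output_layer(W, scores, epsilon=1.0, delta_f=0.5)
```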
Related papers
- Rethinking Improved Privacy-Utility Trade-off with Pre-existing Knowledge for DP Training [31.559864332056648]
We propose a generic differential privacy framework with heterogeneous noise (DP-Hero)
Atop DP-Hero, we instantiate a heterogeneous version of DP-SGD, where the noise injected into gradient updates is heterogeneous and guided by prior-established model parameters.
We conduct comprehensive experiments to verify and explain the effectiveness of the proposed DP-Hero, showing improved training accuracy compared with state-of-the-art works.
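A minimal sketch of the heterogeneous-noise idea follows; the importance weighting and per-coordinate noise-scaling rule below are hypothetical placeholders, not DP-Hero's actual mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

def hetero_noisy_grad(grad, prior_params, sigma=1.0, clip=1.0):
    # Clip the gradient to L2 norm `clip`, then scale Gaussian noise per
    # coordinate using an importance weight derived from pre-existing
    # (assumed public) parameters: less important coordinates receive
    # relatively more noise.  The weighting rule here is illustrative.
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    importance = np.abs(prior_params) / (np.abs(prior_params).max() + 1e-12)
    noise_scale = sigma * clip * (2.0 - importance)
    return g + rng.normal(scale=noise_scale)

grad = rng.normal(size=5)
prior = rng.normal(size=5)
print(hetero_noisy_grad(grad, prior))
```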
arXiv Detail & Related papers (2024-09-05T08:40:54Z)
- Approximating Two-Layer ReLU Networks for Hidden State Analysis in Differential Privacy [3.8254443661593633]
We show that it is possible to privately train convex problems with privacy-utility trade-offs comparable to those of one-hidden-layer ReLU networks trained with DP-SGD.
Our experiments on benchmark classification tasks show that NoisyCGD can achieve privacy-utility trade-offs comparable to DP-SGD applied to one-hidden-layer ReLU networks.
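A rough sketch of noisy cyclic gradient descent on a convex least-squares problem is shown below; the clipping and noise parameters are illustrative, and the hidden-state privacy accounting (the paper's contribution) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_cgd(X, y, epochs=5, lr=0.05, clip=1.0, sigma=1.0, n_batches=4):
    # Noisy cyclic gradient descent: pass over the same disjoint batches
    # in the same order every epoch, clipping each batch gradient and
    # adding Gaussian noise before the update.
    n, d = X.shape
    w = np.zeros(d)
    batches = np.array_split(np.arange(n), n_batches)  # fixed cyclic order
    for _ in range(epochs):
        for idx in batches:
            g = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))
            w -= lr * (g + rng.normal(scale=sigma * clip / len(idx), size=d))
    return w

X = rng.normal(size=(64, 5))
y = rng.normal(size=64)
w_priv = noisy_cgd(X, y)
```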
arXiv Detail & Related papers (2024-07-05T22:43:32Z)
- Adaptive Differential Privacy in Federated Learning: A Priority-Based Approach [0.0]
Federated learning (FL) develops global models without direct access to local datasets.
DP offers a framework that provides a privacy guarantee by adding calibrated amounts of noise to model parameters.
We propose adaptive noise addition in FL which decides the value of injected noise based on features' relative importance.
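A minimal sketch of priority-based noise allocation is given below, assuming the importance scores are supplied externally; the budget-splitting rule is illustrative, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

def adaptive_dp_update(local_update, importance, epsilon_total, sensitivity=1.0):
    # Split the total privacy budget across parameters in proportion to
    # an externally supplied importance score: more important parameters
    # receive a larger share of epsilon and hence less Laplace noise.
    # By basic composition the per-parameter budgets sum to epsilon_total.
    eps_per_param = epsilon_total * importance / importance.sum()
    noise = rng.laplace(scale=sensitivity / eps_per_param)
    return local_update + noise

update = rng.normal(size=4)
importance = np.array([4.0, 2.0, 1.0, 1.0])
print(adaptive_dp_update(update, importance, epsilon_total=1.0))
```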
arXiv Detail & Related papers (2024-01-04T03:01:15Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even for Non-Convex Losses [4.68299658663016]
The Noisy-SGD algorithm is widely used for privately training machine learning models.
Recent findings have shown that if the internal state remains hidden, then the privacy loss might remain bounded.
We address this problem for DP-SGD, a popular variant of Noisy-SGD that incorporates gradient clipping to limit the impact of individual samples on the training process.
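For reference, a minimal NumPy sketch of the per-sample clipping plus Gaussian noise step that DP-SGD uses, shown here for linear regression; the clipping threshold and noise multiplier are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0):
    # One DP-SGD step for linear regression (squared loss): clip each
    # per-example gradient to L2 norm `clip`, sum, add Gaussian noise
    # with standard deviation sigma * clip, then average and step.
    per_example = 2.0 * (X @ w - y)[:, None] * X          # shape (n, d)
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example * np.minimum(1.0, clip / (norms + 1e-12))
    noisy_sum = clipped.sum(axis=0) + rng.normal(scale=sigma * clip, size=w.shape)
    return w - lr * noisy_sum / len(X)

X = rng.normal(size=(32, 5))
y = rng.normal(size=32)
w = np.zeros(5)
for _ in range(10):
    w = dp_sgd_step(w, X, y)
```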
arXiv Detail & Related papers (2023-05-17T02:25:56Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical challenge of developing machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning [3.822543555265593]
Differential Privacy (DP) has emerged as a rigorous formalism to reason about privacy leakage.
In machine learning (ML), DP has been employed to limit the disclosure of training examples.
For deep neural networks, gradient perturbation results in the lowest privacy leakage.
arXiv Detail & Related papers (2021-12-24T08:40:28Z)
- Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning [74.73901662374921]
A differentially private model degrades utility drastically when it comprises a large number of trainable parameters.
We propose an algorithm, Gradient Embedding Perturbation (GEP), for training differentially private deep models with decent accuracy.
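A minimal sketch of the gradient-embedding idea follows, under illustrative assumptions: the low-dimensional basis is assumed to come from public anchor gradients, and the clipping thresholds and noise scales are placeholders rather than the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(5)

def clip_rows(M, c):
    # Clip each row of M to L2 norm at most c.
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    return M * np.minimum(1.0, c / (norms + 1e-12))

def gep_step(per_example_grads, basis, clip_emb=1.0, clip_res=1.0,
             sigma_emb=0.5, sigma_res=2.0):
    # Project each per-example gradient onto a low-dimensional basis
    # (rows of `basis` are orthonormal), clip and noise the embedding
    # and the residual separately, then reconstruct and average.
    emb = per_example_grads @ basis.T                  # (n, k) embedding
    res = per_example_grads - emb @ basis              # (n, d) residual
    emb_sum = clip_rows(emb, clip_emb).sum(axis=0)
    res_sum = clip_rows(res, clip_res).sum(axis=0)
    emb_sum += rng.normal(scale=sigma_emb * clip_emb, size=emb_sum.shape)
    res_sum += rng.normal(scale=sigma_res * clip_res, size=res_sum.shape)
    return (emb_sum @ basis + res_sum) / len(per_example_grads)

d, k, n = 50, 5, 16
Q, _ = np.linalg.qr(rng.normal(size=(d, k)))           # orthonormal columns
grads = rng.normal(size=(n, d))
g_private = gep_step(grads, Q.T)
```

The design intuition is that the embedding carries most of the gradient signal in far fewer dimensions, so it can be released with much less noise than the full gradient.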
arXiv Detail & Related papers (2021-02-25T04:29:58Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
The local exchange of estimates allows the inference of private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
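A simplified sketch of the nullspace condition is shown below for the special case of plain averaging; the paper's graph-homomorphic construction is more general than this.

```python
import numpy as np

rng = np.random.default_rng(6)

def nullspace_perturbations(n_agents, dim):
    # Draw i.i.d. noise per agent, then subtract the across-agent mean
    # so the perturbations sum to zero.  Individual exchanged estimates
    # are masked, but the network-wide average is unchanged: the noise
    # lies in the nullspace of the averaging operator.
    noise = rng.normal(size=(n_agents, dim))
    return noise - noise.mean(axis=0, keepdims=True)

estimates = rng.normal(size=(8, 3))        # 8 agents, 3-dimensional estimates
masked = estimates + nullspace_perturbations(8, 3)
assert np.allclose(masked.mean(axis=0), estimates.mean(axis=0))
```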
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, it is still possible to divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
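A minimal sketch of loss-value perturbation follows; the clipping threshold and noise scale are illustrative, and the Rényi-DP accounting from the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def noisy_loss(loss_value, sigma=0.5, clip=1.0):
    # Clip the scalar discriminator loss to bound its sensitivity, then
    # add Gaussian noise before the loss is used for the gradient step.
    return float(np.clip(loss_value, -clip, clip)) + rng.normal(scale=sigma)

print(noisy_loss(2.7))
```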
arXiv Detail & Related papers (2020-07-04T09:51:02Z)