Sensitivity analysis in differentially private machine learning using
hybrid automatic differentiation
- URL: http://arxiv.org/abs/2107.04265v1
- Date: Fri, 9 Jul 2021 07:19:23 GMT
- Title: Sensitivity analysis in differentially private machine learning using
hybrid automatic differentiation
- Authors: Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kritika Prakash,
Andrew Trask, Rickmer Braren, Marcus Makowski, Daniel Rueckert, Georgios
Kaissis
- Abstract summary: We introduce a novel \textit{hybrid} automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in data processing settings.
- Score: 54.88777449903538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, formal methods of privacy protection such as differential
privacy (DP), capable of deployment to data-driven tasks such as machine
learning (ML), have emerged. Reconciling large-scale ML with the closed-form
reasoning required for the principled analysis of individual privacy loss
requires the introduction of new tools for automatic sensitivity analysis and
for tracking an individual's data and their features through the flow of
computation. For this purpose, we introduce a novel \textit{hybrid} automatic
differentiation (AD) system which combines the efficiency of reverse-mode AD
with an ability to obtain a closed-form expression for any given quantity in
the computational graph. This enables modelling the sensitivity of arbitrary
differentiable function compositions, such as the training of neural networks
on private data. We demonstrate our approach by analysing the individual DP
guarantees of statistical database queries. Moreover, we investigate the
application of our technique to the training of DP neural networks. Our
approach can enable principled reasoning about privacy loss in the setting
of data processing, and further the development of automatic sensitivity
analysis and privacy budgeting systems.
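To make the idea of closed-form sensitivity analysis concrete, the following is a minimal sketch rather than the authors' system: it represents a mean query over a toy database symbolically, reads off how much the output can change when a single record changes, and calibrates Gaussian noise to that bound. The mean query, the [0, 1] record range, and the (epsilon, delta) values are illustrative assumptions, and the classical Gaussian-mechanism calibration stands in for the paper's own accounting.

```python
# Minimal sketch (not the authors' system): symbolic view of a query, a bound
# on its per-record sensitivity, and Gaussian noise calibrated to that bound.
# The query, record range, and (epsilon, delta) values are illustrative.
import numpy as np
import sympy as sp

n = 100
xs = sp.symbols(f"x0:{n}")          # one symbol per individual's record
query = sum(xs) / n                 # mean query, available in closed form

# For a linear query over records bounded in [0, 1], replacing one record
# changes the output by at most |d(query)/d(x_i)| * 1, so the worst case over
# individuals bounds the sensitivity of this scalar query (L1 = L2 here).
per_record = [abs(sp.diff(query, x)) for x in xs]
sensitivity = float(max(per_record))          # 1 / n for the mean query

# Classical Gaussian mechanism calibration from sensitivity and (epsilon, delta).
epsilon, delta = 1.0, 1e-5
sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

rng = np.random.default_rng(0)
records = rng.uniform(0.0, 1.0, size=n)       # toy private data in [0, 1]
true_answer = records.mean()
dp_answer = true_answer + rng.normal(0.0, sigma)
print(f"sensitivity={sensitivity:.4f}, sigma={sigma:.4f}, dp_answer={dp_answer:.4f}")
```

For a neural network, the same kind of per-record bound would be obtained by combining reverse-mode AD over the training computation with a symbolic trace of selected quantities, which is the "hybrid" aspect the abstract refers to.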
Related papers
- Differential Privacy Mechanisms in Neural Tangent Kernel Regression [29.187250620950927]
We study differential privacy (DP) in the Neural Tangent Kernel (NTK) regression setting.
We show provable guarantees for both differential privacy and test accuracy of our NTK regression.
To our knowledge, this is the first work to provide a DP guarantee for NTK regression.
arXiv Detail & Related papers (2024-07-18T15:57:55Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- Differentially Private Linear Regression with Linked Data [3.9325957466009203]
Differential privacy, a mathematical notion from computer science, is a rising tool offering robust privacy guarantees.
Recent work focuses on developing differentially private versions of individual statistical and machine learning tasks.
We present two differentially private algorithms for linear regression with linked data.
arXiv Detail & Related papers (2023-08-01T21:00:19Z)
- Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
arXiv Detail & Related papers (2022-02-17T02:01:37Z)
- DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning [3.822543555265593]
Differential Privacy (DP) has emerged as a rigorous formalism to reason about privacy leakage.
In machine learning (ML), DP has been employed to limit the disclosure of training examples.
For deep neural networks, gradient perturbation results in the lowest privacy leakage (a minimal gradient-perturbation sketch appears after this list).
arXiv Detail & Related papers (2021-12-24T08:40:28Z)
- Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z)
- An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
arXiv Detail & Related papers (2021-09-22T08:07:42Z)
- NeuralDP: Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
arXiv Detail & Related papers (2021-07-30T12:40:19Z)
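As a companion to the DP-UTIL entry above, the following is a generic gradient-perturbation (DP-SGD-style) sketch in NumPy. It is not code from any of the listed papers; the clipping norm, noise multiplier, and toy logistic-regression setup are illustrative assumptions, and the privacy accounting that converts the noise multiplier into an (epsilon, delta) guarantee is deliberately omitted.

```python
# Generic gradient-perturbation (DP-SGD-style) sketch; illustrative settings,
# no privacy accounting.
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 10
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.1 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)
clip_norm, noise_multiplier, lr, steps, batch = 1.0, 1.0, 0.1, 200, 64

def per_example_grads(w, Xb, yb):
    """Per-example logistic-loss gradients, one row per example."""
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    return (p - yb)[:, None] * Xb

for _ in range(steps):
    idx = rng.choice(n, size=batch, replace=False)
    g = per_example_grads(w, X[idx], y[idx])
    # Clip each example's gradient to bound its contribution (the sensitivity
    # of the gradient sum), then add Gaussian noise scaled to that bound.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g_clipped = g / np.maximum(1.0, norms / clip_norm)
    noisy_sum = g_clipped.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=d)
    w -= lr * noisy_sum / batch
```

Clipping bounds each example's contribution to the gradient sum, so Gaussian noise scaled to the clipping norm suffices to mask any single example's influence on the update.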
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.