Differentially Private Neural Network Training under Hidden State Assumption
- URL: http://arxiv.org/abs/2407.08233v1
- Date: Thu, 11 Jul 2024 07:14:40 GMT
- Title: Differentially Private Neural Network Training under Hidden State Assumption
- Authors: Ding Chen, Chen Liu
- Abstract summary: We present a novel approach called differentially private stochastic block coordinate descent (DP-SBCD) for training neural networks with provable guarantees of privacy under the hidden state assumption.
- Score: 2.3371504588528635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel approach called differentially private stochastic block coordinate descent (DP-SBCD) for training neural networks with provable guarantees of differential privacy under the hidden state assumption. Our methodology incorporates Lipschitz neural networks and decomposes the training process of the neural network into sub-problems, each corresponding to the training of a specific layer. By doing so, we extend the analysis of differential privacy under the hidden state assumption to encompass non-convex problems and algorithms employing proximal gradient descent. Furthermore, in contrast to existing methods, we adopt a novel approach by utilizing calibrated noise sampled from adaptive distributions, yielding improved empirical trade-offs between utility and privacy.
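The abstract describes the overall recipe (per-layer sub-problems, noisy gradient updates, a proximal step compatible with Lipschitz networks) without giving pseudocode. Below is a minimal NumPy sketch of that style of noisy per-layer block coordinate update, assuming a two-layer ReLU regressor with squared loss, per-example gradient clipping, fixed Gaussian noise calibrated to the clipping norm, and a spectral-norm projection standing in for the proximal step and the Lipschitz constraint. All function names and hyperparameters are illustrative and are not taken from the paper; in particular, the paper's adaptive noise distributions are not modeled here.

```python
# Hypothetical sketch of noisy per-layer block coordinate descent with a
# projection (proximal) step enforcing a Lipschitz bound. Names and
# hyperparameters are illustrative, not taken from the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def clip_rows(g, clip_norm):
    """Clip each per-example gradient (first axis) to L2 norm <= clip_norm."""
    norms = np.linalg.norm(g.reshape(g.shape[0], -1), axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    return g * scale.reshape((-1,) + (1,) * (g.ndim - 1))

def project_spectral(W, lip):
    """Project W onto {W : ||W||_2 <= lip} by clipping its singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.minimum(s, lip)) @ Vt

def noisy_block_step(W, per_example_grads, lr, clip_norm, sigma, lip):
    """One noisy proximal update for a single layer (block), others held fixed."""
    n = per_example_grads.shape[0]
    g = clip_rows(per_example_grads, clip_norm).mean(axis=0)
    g = g + rng.normal(0.0, sigma * clip_norm / n, size=g.shape)  # calibrated Gaussian noise
    return project_spectral(W - lr * g, lip)

# Toy data and a two-layer ReLU regressor.
n, d, h = 256, 10, 16
X = rng.normal(size=(n, d)); y = rng.normal(size=n)
W1 = rng.normal(size=(d, h)) * 0.1
w2 = rng.normal(size=(h, 1)) * 0.1

for _ in range(50):
    z = X @ W1; a = np.maximum(z, 0.0)
    resid = (a @ w2).ravel() - y                          # (n,)
    # Block 1: output layer, holding W1 fixed.
    g2 = resid[:, None, None] * a[:, :, None]             # per-example grads, (n, h, 1)
    w2 = noisy_block_step(w2, g2, lr=0.05, clip_norm=1.0, sigma=1.0, lip=2.0)
    # Block 2: hidden layer, holding w2 fixed.
    back = (resid[:, None] * w2.ravel()[None, :]) * (z > 0)   # (n, h)
    g1 = X[:, :, None] * back[:, None, :]                 # per-example grads, (n, d, h)
    W1 = noisy_block_step(W1, g1, lr=0.05, clip_norm=1.0, sigma=1.0, lip=2.0)
```

The spectral-norm projection is one concrete choice of proximal operator consistent with the Lipschitz-network setting mentioned in the abstract; a reader reproducing the paper's DP-SBCD would substitute the authors' actual sub-problem solver and adaptive noise schedule.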
Related papers
- Approximating Two-Layer ReLU Networks for Hidden State Analysis in Differential Privacy [3.8254443661593633]
We show that privately training convex problems can achieve privacy-utility trade-offs comparable to those of one-hidden-layer ReLU networks trained with DP-SGD.
Our experiments on benchmark classification tasks show that NoisyCGD achieves privacy-utility trade-offs comparable to those of DP-SGD applied to one-hidden-layer ReLU networks.
arXiv Detail & Related papers (2024-07-05T22:43:32Z) - Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z) - DP-SGD with weight clipping [1.0878040851638]
We present a novel approach that mitigates the bias arising from traditional gradient clipping.
By leveraging a public upper bound of the Lipschitz value of the current model and its current location within the search domain, we can achieve refined noise level adjustments.
arXiv Detail & Related papers (2023-10-27T09:17:15Z) - Domain Generalization Guided by Gradient Signal to Noise Ratio of Parameters [69.24377241408851]
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
We propose to base the selection on the gradient signal-to-noise ratio (GSNR) of the network's parameters.
arXiv Detail & Related papers (2023-10-11T10:21:34Z) - Local Graph-homomorphic Processing for Privatized Distributed Systems [57.14673504239551]
We show that the added noise does not affect the performance of the learned model.
This is a significant improvement to previous works on differential privacy for distributed algorithms.
arXiv Detail & Related papers (2022-10-26T10:00:14Z) - NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
arXiv Detail & Related papers (2021-07-30T12:40:19Z) - Differentially private training of neural networks with Langevin dynamics for calibrated predictive uncertainty [58.730520380312676]
We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
This represents a serious issue for safety-critical applications, e.g. in medical diagnosis.
arXiv Detail & Related papers (2021-07-09T08:14:45Z) - Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent [95.94432031144716]
We propose a unified non-convex optimization framework for the analysis of neural network training.
We show that existing guarantees for networks trained by gradient descent can be unified within this framework.
arXiv Detail & Related papers (2021-06-25T17:45:00Z) - Wide Network Learning with Differential Privacy [7.453881927237143]
The current generation of neural networks suffers a significant loss in accuracy under most practically relevant privacy training regimes.
We develop a general approach towards training these models that takes advantage of the sparsity of the gradients of private Empirical Risk Minimization (ERM).
For the same number of parameters, we propose a novel algorithm for privately training neural networks.
arXiv Detail & Related papers (2021-03-01T20:31:50Z) - Differentially Private Deep Learning with Direct Feedback Alignment [15.410557873153833]
We propose the first differentially private method for training deep neural networks with direct feedback alignment (DFA).
DFA achieves significant gains in accuracy (often by 10-20%) compared to backprop-based differentially private training on a variety of architectures.
arXiv Detail & Related papers (2020-10-08T00:25:22Z)