Differentially Private Bayesian Neural Networks on Accuracy, Privacy and
Reliability
- URL: http://arxiv.org/abs/2107.08461v1
- Date: Sun, 18 Jul 2021 14:37:07 GMT
- Title: Differentially Private Bayesian Neural Networks on Accuracy, Privacy and
Reliability
- Authors: Qiyiwen Zhang, Zhiqi Bu, Kan Chen, Qi Long
- Abstract summary: We analyze the trade-off between privacy and accuracy in Bayesian neural networks (BNNs).
We propose three DP-BNNs that characterize the weight uncertainty for the same network architecture in distinct ways.
We show a new equivalence between DP-SGD and DP-SGLD, implying that some non-Bayesian DP training naturally allows for uncertainty quantification.
- Score: 18.774153273396244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian neural networks (BNNs) allow for uncertainty quantification in
prediction, offering an advantage over regular neural networks that has not
been explored in the differential privacy (DP) framework. We fill this
important gap by leveraging recent development in Bayesian deep learning and
privacy accounting to offer a more precise analysis of the trade-off between
privacy and accuracy in BNN. We propose three DP-BNNs that characterize the
weight uncertainty for the same network architecture in distinct ways, namely
DP-SGLD (via the noisy gradient method), DP-BBP (via changing the parameters of
interest) and DP-MC Dropout (via the model architecture). Interestingly, we
show a new equivalence between DP-SGD and DP-SGLD, implying that some
non-Bayesian DP training naturally allows for uncertainty quantification.
However, hyperparameters such as the learning rate and batch size can have
different or even opposite effects in DP-SGD and DP-SGLD.
Extensive experiments are conducted to compare DP-BNNs, in terms of privacy
guarantee, prediction accuracy, uncertainty quantification, calibration,
computation speed, and generalizability to network architecture. As a result,
we observe a new trade-off between privacy and reliability. When
compared to non-DP and non-Bayesian approaches, DP-SGLD is remarkably accurate
under a strong privacy guarantee, demonstrating the great potential of DP-BNN in
real-world tasks.
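To make the DP-SGD/DP-SGLD connection in the abstract concrete, below is a minimal NumPy sketch on a toy logistic-regression problem. It is not the authors' implementation: the model, the hyperparameters (eta, clip_norm, sigma, batch size, burn-in cutoff) and the crude reuse of noisy iterates as approximate posterior samples are illustrative assumptions only. The point is that a clipped, noise-injected update can be read either as a DP-SGD step or as a stochastic-gradient Langevin step (the precise correspondence of learning rate and noise scale is the subject of the paper), so keeping the iterates yields a predictive mean and spread at essentially no extra cost.

```python
# Minimal sketch (not the paper's code): DP-SGD-style updates on a toy
# logistic-regression model, reused as DP-SGLD-style posterior samples.
# All hyperparameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points, 5 features, binary labels.
n, d = 200, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

def per_example_grads(w, Xb, yb):
    """Per-example gradients of the logistic loss."""
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    return (p - yb)[:, None] * Xb                         # shape (batch, d)

def dp_step(w, Xb, yb, eta=0.1, clip_norm=1.0, sigma=1.0):
    """One DP-SGD update: clip each per-example gradient, sum, add Gaussian noise."""
    g = per_example_grads(w, Xb, yb)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip_norm)             # per-example clipping
    noise = sigma * clip_norm * rng.normal(size=w.shape)   # Gaussian mechanism
    return w - eta * (g.sum(axis=0) + noise) / len(yb)

# Run training and KEEP the iterates: under the SGLD reading, the noisy
# trajectory (after burn-in) behaves like samples from an approximate posterior,
# so the "non-Bayesian" DP training loop comes with uncertainty for free.
w = np.zeros(d)
samples = []
batch_size = 50
for step in range(500):
    idx = rng.choice(n, size=batch_size, replace=False)
    w = dp_step(w, X[idx], y[idx])
    if step >= 300:            # crude burn-in cutoff, purely illustrative
        samples.append(w.copy())

samples = np.stack(samples)

# Posterior-predictive summary on a test point: mean and spread over samples.
x_test = rng.normal(size=d)
probs = 1.0 / (1.0 + np.exp(-(samples @ x_test)))
print(f"predictive mean {probs.mean():.3f}, predictive std {probs.std():.3f}")
```

In practice one would rely on a DP library's per-sample clipping (e.g. Opacus) and a proper privacy accountant to track the (epsilon, delta) guarantee; the sketch only mirrors the structure of the update.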
Related papers
- How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analyses obtained under the two types of batch sampling.
arXiv Detail & Related papers (2024-03-26T13:02:43Z)
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
- Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks [18.4005860362025]
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs).
We propose a novel mechanism to protect nodes' features and edges against PIAs under differential privacy (DP) guarantees.
We derive significantly better randomization probabilities and tighter error bounds at both the node-feature and edge levels (a plain randomized-response baseline is sketched after this list for reference).
arXiv Detail & Related papers (2022-11-10T18:52:46Z)
- DPIS: An Enhanced Mechanism for Differentially Private SGD with Importance Sampling [23.8561225168394]
Differential privacy (DP) has become a well-accepted standard for privacy protection, and deep neural networks (DNNs) have been immensely successful in machine learning.
A classic mechanism for this purpose is DP-SGD, a differentially private version of the stochastic gradient descent (SGD) commonly used for training.
We propose DPIS, a novel mechanism for differentially private SGD training that can be used as a drop-in replacement for the core of DP-SGD.
arXiv Detail & Related papers (2022-10-18T07:03:14Z)
- Dynamic Differential-Privacy Preserving SGD [19.273542515320372]
Differentially private stochastic gradient descent (DP-SGD) prevents training-data privacy breaches by adding noise to the clipped gradient during SGD training.
Using the same clipping operation and additive noise across all training steps results in unstable updates and even a ramp-up period.
We propose dynamic DP-SGD, which incurs a lower privacy cost than standard DP-SGD during updates until both reach the same target privacy budget.
arXiv Detail & Related papers (2021-10-30T04:45:11Z)
- Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into the training of deep neural networks through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z)
- NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
arXiv Detail & Related papers (2021-07-30T12:40:19Z)
- Differentially private training of neural networks with Langevin dynamics for calibrated predictive uncertainty [58.730520380312676]
We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
This represents a serious issue for safety-critical applications, e.g. in medical diagnosis.
arXiv Detail & Related papers (2021-07-09T08:14:45Z)
- Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification [10.727102755903616]
We aim for efficient deep BNNs amenable to complex computer vision architectures.
We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer.
Our approach, Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient (in terms of computation and memory during both training and testing) ensembles.
arXiv Detail & Related papers (2020-12-04T19:50:09Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
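For reference alongside the two graph-privacy papers above, which both rest on perturbing edge information under (local) differential privacy, here is a minimal sketch of plain Warner-style randomized response on one node's adjacency bits. It is not the heterogeneous mechanism or Blink's budget-splitting scheme; the epsilon value, the bit-vector encoding, and the helper names are illustrative assumptions.

```python
# Minimal sketch of plain randomized response on edge bits (illustrative only;
# not the heterogeneous or budget-splitting mechanisms of the papers above).
import numpy as np

def randomized_response(bits: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
    """Report each 0/1 bit truthfully with probability e^eps / (e^eps + 1),
    flip it otherwise; this classic Warner mechanism is epsilon-locally DP per bit."""
    if rng is None:
        rng = np.random.default_rng()
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    keep = rng.random(bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits)

def debias_edge_count(noisy_bits: np.ndarray, epsilon: float) -> float:
    """Unbiased estimate of the true number of 1-bits from the noisy report."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    q = 1.0 - p
    return (noisy_bits.sum() - q * noisy_bits.size) / (p - q)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adjacency_row = (rng.random(1000) < 0.05).astype(int)   # one node's edge bits
    noisy = randomized_response(adjacency_row, epsilon=1.0, rng=rng)
    print("true edges:", adjacency_row.sum(),
          "debiased estimate:", round(debias_edge_count(noisy, 1.0), 1))
```

The debiasing step shows why such mechanisms need careful calibration: because the flip probability is known, aggregate statistics such as the edge count can be recovered in expectation even though each individual bit remains deniable.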