Secure and Differentially Private Bayesian Learning on Distributed Data
- URL: http://arxiv.org/abs/2005.11007v1
- Date: Fri, 22 May 2020 05:13:43 GMT
- Title: Secure and Differentially Private Bayesian Learning on Distributed Data
- Authors: Yeongjae Gil and Xiaoqian Jiang and Miran Kim and Junghye Lee
- Abstract summary: We present a distributed Bayesian learning approach via Preconditioned Stochastic Gradient Langevin Dynamics with RMSprop, which combines differential privacy and homomorphic encryption in a harmonious manner while protecting private information.
We applied the proposed secure and privacy-preserving distributed Bayesian learning approach to logistic regression and survival analysis on distributed data, and demonstrated its feasibility in terms of prediction accuracy and time complexity, compared to the centralized approach.
- Score: 17.098036331529784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data integration and sharing maximally enhance the potential for novel and
meaningful discoveries. However, it is a non-trivial task as integrating data
from multiple sources can put sensitive information of study participants at
risk. To address the privacy concern, we present a distributed Bayesian
learning approach via Preconditioned Stochastic Gradient Langevin Dynamics with
RMSprop, which combines differential privacy and homomorphic encryption in a
harmonious manner while protecting private information. We applied the proposed
secure and privacy-preserving distributed Bayesian learning approach to
logistic regression and survival analysis on distributed data, and demonstrated
its feasibility in terms of prediction accuracy and time complexity, compared
to the centralized approach.
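For orientation, the sampler named in the abstract, pSGLD with an RMSprop-style preconditioner, admits a compact per-step description (following Li et al., 2016). The sketch below is a minimal single-machine illustration only: the differential-privacy noise calibration and the homomorphic-encryption aggregation that the paper layers on top are omitted, and all names and hyperparameters are illustrative rather than taken from the authors' implementation.

```python
import numpy as np

def psgld_step(theta, grad_log_post, v, step=1e-3, alpha=0.99, lam=1e-5, rng=None):
    """One pSGLD step with an RMSprop preconditioner (Li et al., 2016).

    theta         : current parameter vector
    grad_log_post : minibatch estimate of the gradient of the log-posterior
    v             : RMSprop running average of squared gradients
    """
    rng = rng or np.random.default_rng()
    # RMSprop accumulator of squared stochastic gradients.
    v = alpha * v + (1.0 - alpha) * grad_log_post ** 2
    # Diagonal preconditioner G = 1 / (lam + sqrt(v)).
    G = 1.0 / (lam + np.sqrt(v))
    # Langevin update: preconditioned ascent on the log-posterior plus
    # Gaussian noise with diagonal covariance step * G. The curvature
    # correction term Gamma from the pSGLD paper is dropped, as is
    # common in practice.
    noise = rng.normal(size=theta.shape) * np.sqrt(step * G)
    theta = theta + 0.5 * step * G * grad_log_post + noise
    return theta, v
```

Iterating this step over minibatches yields approximate posterior samples of theta; in the distributed setting of the paper, the per-institution gradients would additionally be perturbed for differential privacy and aggregated under homomorphic encryption.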
Related papers
- Initialization Matters: Privacy-Utility Analysis of Overparameterized
Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z) - Data Analytics with Differential Privacy [0.0]
We develop differentially private algorithms to analyze distributed and streaming data.
In the distributed model, we consider the particular problem of learning -- in a distributed fashion -- a global model of the data.
We offer one of the strongest privacy guarantees for the streaming model, user-level pan-privacy.
arXiv Detail & Related papers (2023-07-20T17:43:29Z) - Differentially Private Distributed Estimation and Learning [2.4401219403555814]
We study distributed estimation and learning problems in a networked environment.
Agents exchange information to estimate unknown statistical properties of random variables from privately observed samples.
Agents can estimate the unknown quantities by exchanging information about their private observations, but they also face privacy risks.
arXiv Detail & Related papers (2023-06-28T01:41:30Z) - Generalizing Differentially Private Decentralized Deep Learning with Multi-Agent Consensus [11.414398732656839]
We propose a framework that embeds differential privacy into decentralized deep learning and secures each agent's local dataset during and after cooperative training.
We prove convergence guarantees for algorithms derived from this framework and demonstrate its practical utility when applied to subgradient and ADMM decentralized approaches (a generic sketch of the underlying noise-addition step follows this list).
arXiv Detail & Related papers (2023-06-24T07:46:00Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Sparsity-Inducing Categorical Prior Improves Robustness of the
Information Bottleneck [4.2903672492917755]
We present a novel sparsity-inducing spike-and-slab prior that uses sparsity as a mechanism to provide flexibility.
We show that the proposed approach improves accuracy and robustness compared with traditional fixed-dimensional priors.
arXiv Detail & Related papers (2022-03-04T22:22:51Z) - Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows adversaries to infer private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme that constructs perturbations according to a particular nullspace condition, allowing them to cancel in the aggregate and remain invisible to the learning task (a minimal sketch of this zero-sum idea also follows this list).
arXiv Detail & Related papers (2020-10-23T10:35:35Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z) - Privacy-preserving Traffic Flow Prediction: A Federated Learning
Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU achieves a prediction accuracy of 90.96%, comparable to that of advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
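Several entries above (e.g., the multi-agent consensus framework) rest on the same primitive: clip a local update to bound its sensitivity, then add calibrated Gaussian noise before sharing it. The sketch below is a generic, hypothetical rendering of that primitive; the function name, clipping threshold, and noise multiplier are illustrative and not drawn from any listed paper, and no specific (epsilon, delta) accounting is performed.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a local update and add Gaussian noise before sharing it.

    Clipping bounds each agent's contribution (the L2 sensitivity);
    Gaussian noise scaled by noise_multiplier * clip_norm then provides
    a differential-privacy guarantee whose (epsilon, delta) strength
    must be tracked separately across communication rounds.
    """
    rng = rng or np.random.default_rng()
    # Bound the update's L2 norm (guard against a zero-norm update).
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Add Gaussian noise calibrated to the clipped sensitivity.
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```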
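For the graph-homomorphic perturbation entry above, the nullspace condition can be illustrated in its simplest form: draw noise whose sum across agents is zero, so the aggregate estimate is untouched while individual messages are masked. This is a deliberately simplified, hypothetical sketch; the construction in the cited paper is tied to the network's combination matrix, which is not reproduced here.

```python
import numpy as np

def zero_sum_perturbations(num_agents, dim, scale=1.0, rng=None):
    """Draw noise vectors that cancel across agents.

    Subtracting the across-agent mean projects i.i.d. Gaussian noise onto
    the nullspace of the all-ones aggregation vector, so sum_k n_k = 0.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(scale=scale, size=(num_agents, dim))
    noise -= noise.mean(axis=0, keepdims=True)
    return noise

# Usage: mask each agent's estimate before exchange; the sum (and hence
# the average used by the learning task) is preserved exactly.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))        # 5 agents, 3-dimensional estimates
masked = x + zero_sum_perturbations(5, 3, rng=rng)
assert np.allclose(masked.sum(axis=0), x.sum(axis=0))
```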