GRAFFL: Gradient-free Federated Learning of a Bayesian Generative Model
- URL: http://arxiv.org/abs/2008.12925v1
- Date: Sat, 29 Aug 2020 07:19:44 GMT
- Title: GRAFFL: Gradient-free Federated Learning of a Bayesian Generative Model
- Authors: Seok-Ju Hahn, Junghye Lee
- Abstract summary: This paper presents the first gradient-free federated learning framework called GRAFFL.
It uses implicit information derived from each participating institution to learn posterior distributions of parameters.
We propose the GRAFFL-based Bayesian mixture model to serve as a proof-of-concept of the framework.
- Score: 8.87104231451079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning platforms are gaining popularity. One of their major
benefits is mitigating privacy risks, as algorithms can be learned without
collecting or sharing data. While federated learning (much of it based on
stochastic gradient algorithms) has shown great promise, many challenging
problems remain in protecting privacy, especially during the process of
gradient update and exchange. This paper presents the first gradient-free
federated learning framework, called GRAFFL, for learning a Bayesian
generative model based on approximate Bayesian computation. Unlike
conventional gradient-based federated learning algorithms, our framework
does not require disassembling a model (i.e., into linear components) or
perturbing data (or encrypting data for aggregation) to preserve privacy.
Instead, it uses implicit information derived from each participating
institution to learn posterior distributions of parameters. The implicit
information consists of summary statistics derived from SuffiAE, a neural
network developed in this study to create compressed and linearly separable
representations, thereby protecting sensitive information from leakage. As a
sufficient dimensionality reduction technique, SuffiAE is proven to provide
sufficient summary statistics. We propose the GRAFFL-based Bayesian Gaussian
mixture model as a proof-of-concept of the framework. Using several datasets,
we demonstrate the feasibility and usefulness of our model in terms of
privacy protection and prediction performance (close to an ideal centralized
setting). The trained model, as a quasi-global model, can generate
informative samples incorporating information from other institutions and
enhances the data analysis of each institution.
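To make the mechanism concrete, here is a minimal sketch of the gradient-free idea, assuming rejection-style approximate Bayesian computation, a fixed random linear map standing in for the learned SuffiAE encoder, and a single Gaussian generative model instead of the paper's Gaussian mixture; all names, sizes, and constants are illustrative.
```python
# Hedged sketch: the server learns a posterior over generative-model
# parameters by comparing summary statistics, never gradients or raw data.
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 2
proj = rng.normal(size=(d, k))  # stand-in encoder weights (assumed shared)

def encode(x):
    """Stand-in for SuffiAE: compress data into low-dimensional summaries."""
    z = x @ proj
    return np.array([z.mean(), z.std()])  # the only values that leave a site

# --- institution side: compute summary statistics on private data ---
private_data = rng.normal(loc=2.0, scale=0.7, size=(500, d))
observed_summary = encode(private_data)

# --- server side: ABC over the generative model's parameters ---
def simulate(mu, sigma, n=500):
    """Draw synthetic data from a candidate generative model."""
    return rng.normal(loc=mu, scale=sigma, size=(n, d))

draws, dists = [], []
for _ in range(5000):
    mu = rng.uniform(-5.0, 5.0)    # prior over the mean
    sigma = rng.uniform(0.1, 3.0)  # prior over the scale
    s = encode(simulate(mu, sigma))
    draws.append((mu, sigma))
    dists.append(np.linalg.norm(s - observed_summary))

# keep the closest 1% of draws as approximate posterior samples
post = np.array(draws)[np.argsort(dists)[:50]]
print("posterior mean of (mu, sigma):", post.mean(axis=0))
# typically concentrates near the data-generating (2.0, 0.7)
```
Only the summary statistics cross institutional boundaries; the server's rejection loop touches nothing but simulated data and those summaries.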
Related papers
- QBI: Quantile-Based Bias Initialization for Efficient Private Data Reconstruction in Federated Learning [0.5497663232622965]
Federated learning enables the training of machine learning models on distributed data without compromising user privacy.
Recent research has shown that the central entity can perfectly reconstruct private data from shared model updates.
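Not QBI itself (its quantile-based bias initialization is the paper's contribution), but a minimal PyTorch sketch of the leakage primitive such reconstruction attacks build on: for a linear layer, each neuron's weight gradient equals its bias gradient times the input, so one sample is recovered exactly as a ratio of shared gradients. Layer sizes and the loss are arbitrary choices here.
```python
import torch

torch.manual_seed(0)
layer = torch.nn.Linear(8, 4)
x = torch.randn(1, 8)             # one private sample
layer(x).pow(2).sum().backward()  # squared outputs keep all neurons "active"

i = torch.argmax(layer.bias.grad.abs())        # pick any active neuron
x_hat = layer.weight.grad[i] / layer.bias.grad[i]
print(torch.allclose(x_hat, x[0], atol=1e-5))  # True: input reconstructed
```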
arXiv Detail & Related papers (2024-06-26T20:19:32Z)
- Federated Face Forgery Detection Learning with Personalized Representation
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly training on centralized data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
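A hedged sketch of the projected-gradient idea: ascend the forget-data loss, but first strip the component along gradient directions important to the retained data. The rank-one "retained subspace" below is a simplification; PGU as described in the paper constructs this subspace differently and does not require access to the original training set.
```python
import torch

torch.manual_seed(0)
w = torch.randn(10)          # flattened model parameters
g_forget = torch.randn(10)   # gradient that increases loss on forget data
g_retain = torch.randn(10)   # gradient direction mattering to retained data

u = g_retain / g_retain.norm()          # unit retained direction
g_proj = g_forget - (g_forget @ u) * u  # orthogonal-complement projection

w = w + 0.1 * g_proj                    # unlearning step
print(float(g_proj @ g_retain))         # ~0: retained direction untouched
```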
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- VertiBayes: Learning Bayesian network parameters from vertically partitioned data with missing values [2.9707233220536313]
Federated learning makes it possible to train a machine learning model on decentralized data.
We propose a novel method called VertiBayes to train Bayesian networks on vertically partitioned data.
We experimentally show our approach produces models comparable to those learnt using traditional algorithms.
arXiv Detail & Related papers (2022-10-31T11:13:35Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that contain information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
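For context, a minimal sketch of the attack family under discussion, in the spirit of "deep leakage from gradients": the attacker optimizes a dummy input so the gradients it induces match the client's shared gradients. The label is assumed known (the simplest variant), and the model and optimizer settings are illustrative.
```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(16, 2)
loss_fn = torch.nn.CrossEntropyLoss()
x_true = torch.randn(1, 16)  # the client's private input
y_true = torch.tensor([1])
true_grads = torch.autograd.grad(
    loss_fn(model(x_true), y_true), model.parameters())  # what the server sees

x_dummy = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        loss_fn(model(x_dummy), y_true), model.parameters(), create_graph=True)
    gap = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    gap.backward()  # gradient-matching loss, differentiated w.r.t. x_dummy
    opt.step()

print(torch.dist(x_dummy.detach(), x_true))  # typically small: input leaked
```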
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Secure Neuroimaging Analysis using Federated Learning with Homomorphic Encryption [14.269757725951882]
Federated learning (FL) enables distributed computation of machine learning models over disparate, remote data sources.
Recent membership attacks show that private or sensitive personal data can sometimes be leaked or inferred when model parameters or summary statistics are shared with a central site.
We propose a framework for secure FL using fully homomorphic encryption (FHE).
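A hedged sketch of the encrypted-aggregation step, using Paillier encryption (additively homomorphic, via the python-paillier "phe" package) as a stand-in for the fully homomorphic scheme the paper actually proposes: the server sums ciphertexts without ever seeing an individual site's update.
```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

site_updates = [0.12, -0.05, 0.30]  # one scalar model update per site
ciphertexts = [public_key.encrypt(u) for u in site_updates]

# server: adds ciphertexts; individual updates stay hidden
encrypted_sum = sum(ciphertexts[1:], ciphertexts[0])

# only the key holder can decrypt the aggregate
print(private_key.decrypt(encrypted_sum) / len(site_updates))  # ~0.1233
```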
arXiv Detail & Related papers (2021-08-07T12:15:52Z)
- GRNN: Generative Regression Neural Network -- A Data Leakage Attack for Federated Learning [3.050919759387984]
We show that image-based private data can be fully recovered from the shared gradient alone via our proposed Generative Regression Neural Network (GRNN).
We evaluate our method on several image classification tasks. The results illustrate that our proposed GRNN outperforms state-of-the-art methods with better stability, stronger robustness, and higher accuracy.
arXiv Detail & Related papers (2021-05-02T18:39:37Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little computational overhead.
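A hedged sketch of what the inner maximization can look like in practice, using a PGD-style surrogate: perturb the batch within a small budget to increase the loss, then take the outer training step on the perturbed batch. The paper's distributionally robust (Wasserstein-ball) formulation differs in detail; everything below is illustrative.
```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 2)
loss_fn = torch.nn.CrossEntropyLoss()
x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))

eps = 0.1                                       # perturbation budget
delta = torch.zeros_like(x, requires_grad=True)
for _ in range(3):                              # inner ascent steps
    g, = torch.autograd.grad(loss_fn(model(x + delta), y), delta)
    with torch.no_grad():
        delta += 0.05 * g.sign()                # move toward higher loss
        delta.clamp_(-eps, eps)                 # stay inside the budget

loss_fn(model(x + delta.detach()), y).backward()  # outer descent step
```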
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Large-Scale Secure XGB for Vertical Federated Learning [15.864654742542246]
In this paper, we aim to build large-scale secure XGB under a vertical federated learning setting.
We employ secure multi-party computation techniques to avoid leaking intermediate information during training.
By proposing secure permutation protocols, we improve training efficiency and make the framework scale to large datasets.
arXiv Detail & Related papers (2020-05-18T06:31:10Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
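A hedged sketch combining the two ingredients named here: Gaussian noise added for privacy, then Laplacian smoothing of the noisy gradient by solving (I + sigma*L)u = g with a circulant 1-D Laplacian L via FFT, as in Laplacian-smoothing SGD. The noise scale is illustrative, not calibrated to any privacy budget.
```python
import numpy as np

rng = np.random.default_rng(0)

def laplacian_smooth(g, sigma=1.0):
    """Solve (I + sigma*L) u = g; L is circulant, so the FFT diagonalizes it."""
    n = g.shape[0]
    eig = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)  # eigenvalues of L
    return np.real(np.fft.ifft(np.fft.fft(g) / (1.0 + sigma * eig)))

grad = np.sin(np.linspace(0.0, 4.0 * np.pi, 128))      # a smooth "true" gradient
noisy = grad + rng.normal(scale=0.5, size=grad.shape)  # noise added for privacy
smoothed = laplacian_smooth(noisy)

print(np.linalg.norm(noisy - grad), np.linalg.norm(smoothed - grad))
# smoothing damps the high-frequency noise, recovering much of the utility
```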
arXiv Detail & Related papers (2020-05-01T04:28:38Z)