Communication-Computation Efficient Secure Aggregation for Federated
Learning
- URL: http://arxiv.org/abs/2012.05433v2
- Date: Mon, 21 Dec 2020 03:15:44 GMT
- Title: Communication-Computation Efficient Secure Aggregation for Federated
Learning
- Authors: Beongjun Choi, Jy-yong Sohn, Dong-Jun Han and Jaekyun Moon
- Abstract summary: Federated learning is a way to train neural networks using data distributed over multiple nodes without the need for the nodes to share data.
A recent solution based on the secure aggregation primitive enabled privacy-preserving federated learning, but at the expense of significant extra communication/computational resources.
We propose a communication-computation efficient secure aggregation scheme which substantially reduces the amount of communication/computational resources.
- Score: 23.924656276456503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has been spotlighted as a way to train neural networks
using data distributed over multiple nodes without the need for the nodes to
share data. Unfortunately, it has also been shown that data privacy cannot be
fully guaranteed, as adversaries may be able to extract certain information
about local data from the model parameters transmitted during federated learning.
A recent solution based on the secure aggregation primitive enabled
privacy-preserving federated learning, but at the expense of significant extra
communication/computational resources. In this paper, we propose a
communication-computation efficient secure aggregation scheme which
substantially reduces the amount of communication/computational resources
relative to the existing secure solution without sacrificing data privacy. The key idea behind
the suggested scheme is to design the topology of the secret-sharing nodes as
sparse random graphs instead of the complete graph corresponding to the
existing solution. We first obtain the necessary and sufficient condition on
the graph to guarantee reliable and private federated learning in the
information-theoretic sense. We then suggest using the Erdős-Rényi graph
in particular and provide theoretical guarantees on the reliability/privacy of
the proposed scheme. Through extensive real-world experiments, we demonstrate
that our scheme, using only 20-30% of the resources required in the
conventional scheme, maintains virtually the same levels of reliability and
data privacy in practical federated learning systems.
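To make the key idea concrete, here is a minimal sketch of pairwise masking restricted to the edges of an Erdős-Rényi graph: masks along each edge cancel in the sum, so the server learns only the aggregate, while the sparse topology keeps the per-node cost low. This is an illustration of the flavor of the scheme, not the paper's actual protocol (which additionally uses secret sharing to tolerate node dropouts); all names are illustrative.

```python
import numpy as np

def erdos_renyi_edges(n, p, rng):
    """Sample an undirected Erdos-Renyi graph G(n, p) as a list of edges."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def masked_updates(updates, edges, rng):
    """Add one random pairwise mask per graph edge; masks cancel in the sum.

    For each edge (i, j), node i adds +r_ij and node j adds -r_ij, so each
    individual report is masked but the aggregate is unchanged.
    """
    masked = [u.astype(np.float64) for u in updates]
    for i, j in edges:
        r = rng.normal(size=updates[0].shape)  # mask shared by the pair (i, j)
        masked[i] += r
        masked[j] -= r
    return masked

rng = np.random.default_rng(0)
n, dim = 10, 5
updates = [rng.normal(size=dim) for _ in range(n)]   # local model updates
edges = erdos_renyi_edges(n, p=0.5, rng=rng)         # sparse topology instead of the complete graph
masked = masked_updates(updates, edges, rng)

# The server recovers the correct aggregate from masked reports only.
assert np.allclose(sum(masked), sum(updates))
```

With p well below 1, each node exchanges masks with only a fraction of the other nodes, which is where the communication/computation savings over the complete-graph scheme come from.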
Related papers
- UFed-GAN: A Secure Federated Learning Framework with Constrained
Computation and Unlabeled Data [50.13595312140533]
We propose a novel framework, UFed-GAN (Unsupervised Federated Generative Adversarial Network), which can capture the user-side data distribution without local classification training.
Our experimental results demonstrate the strong potential of UFed-GAN in addressing limited computational resources and unlabeled data while preserving privacy.
arXiv Detail & Related papers (2023-08-10T22:52:13Z) - Graph Federated Learning Based on the Decentralized Framework [8.619889123184649]
Graph-federated learning has mainly been built on the classical federated learning setup, i.e., the client-server framework.
We introduce a decentralized framework to graph-federated learning.
The proposed method is compared with FedAvg, FedProx, GCFL, and GCFL+ to verify its effectiveness.
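For context, the FedAvg baseline named above is simply a data-size-weighted average of client parameters; a minimal sketch with toy numbers (not this paper's decentralized method):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameters weighted by local data size."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Three clients with different amounts of local data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 50, 50]
print(fedavg(clients, sizes))  # [2.5, 3.5]
```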
arXiv Detail & Related papers (2023-07-19T07:40:51Z) - Semi-decentralized Federated Ego Graph Learning for Recommendation [58.21409625065663]
We propose a semi-decentralized federated ego graph learning framework for on-device recommendations, named SemiDFEGL.
The proposed framework is model-agnostic, meaning that it can be seamlessly integrated with existing graph neural network-based recommendation methods and privacy protection techniques.
arXiv Detail & Related papers (2023-02-10T03:57:45Z) - Federated Learning with Privacy-Preserving Ensemble Attention
Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
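As a toy illustration of one-way offline distillation on unlabeled public data: each node shares only soft predictions on the public set, and a central student fits the ensemble average. The linear "teachers" and all names here are stand-ins, not the paper's attention-distillation models:

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
teachers = [rng.normal(size=(8, 3)) for _ in range(4)]  # 4 local teacher models (toy linear scorers)
public_x = rng.normal(size=(256, 8))                    # unlabeled public dataset

# One-way offline step: nodes share only soft predictions on public data,
# never their private data or model parameters.
soft_labels = np.mean([softmax(public_x @ W, t=2.0) for W in teachers], axis=0)

# Server-side distillation: fit a student to the ensemble's soft labels.
student = np.zeros((8, 3))
lr = 0.5
for _ in range(200):
    p = softmax(public_x @ student)
    grad = public_x.T @ (p - soft_labels) / len(public_x)  # cross-entropy gradient
    student -= lr * grad
```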
arXiv Detail & Related papers (2022-10-16T06:44:46Z) - Preserving Privacy in Federated Learning with Ensemble Cross-Domain
Knowledge Distillation [22.151404603413752]
Federated Learning (FL) is a machine learning paradigm where local nodes collaboratively train a central model.
Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution.
We develop a privacy-preserving and communication-efficient method in an FL framework with one-shot offline knowledge distillation.
arXiv Detail & Related papers (2022-09-10T05:20:31Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Communication-Efficient Hierarchical Federated Learning for IoT
Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in heterogeneous IoT systems.
It proposes an optimized solution for user assignment and resource allocation across multiple edge nodes.
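A minimal sketch of the hierarchical aggregation pattern this entry builds on, assuming a fixed user-to-edge assignment (the paper optimizes that assignment and the resource allocation, which this sketch omits):

```python
import numpy as np

def weighted_avg(models, sizes):
    """Average model vectors weighted by the amount of local data."""
    total = sum(sizes)
    return sum((n / total) * m for m, n in zip(models, sizes))

rng = np.random.default_rng(2)
# Two edge nodes, each serving a group of users (toy 4-dim models).
edge_groups = [[rng.normal(size=4) for _ in range(3)] for _ in range(2)]
group_sizes = [[20, 30, 50], [10, 10, 80]]

# Level 1: each edge node aggregates its assigned users over cheap local links.
edge_models = [weighted_avg(m, s) for m, s in zip(edge_groups, group_sizes)]

# Level 2: the cloud aggregates only the edge models, so the expensive
# wide-area link carries far fewer model uploads per round.
global_model = weighted_avg(edge_models, [sum(s) for s in group_sizes])
print(global_model)
```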
arXiv Detail & Related papers (2021-07-14T08:32:39Z) - Weight Divergence Driven Divide-and-Conquer Approach for Optimal
Federated Learning from non-IID Data [0.0]
Federated Learning allows training on data stored in distributed devices without the need to centralize the training data.
We propose a novel Divide-and-Conquer training methodology that enables the use of the popular FedAvg aggregation algorithm.
arXiv Detail & Related papers (2021-06-28T09:34:20Z) - Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows inference of private data.
Perturbations chosen independently at every agent can provide privacy, but result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible in the aggregate.
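The nullspace idea can be illustrated with perturbations constructed to sum to zero across agents: each local estimate is masked, yet the noise lies in the nullspace of the averaging operator and vanishes in the aggregate. This centered-noise construction is a simplification of the paper's graph-homomorphic scheme, for intuition only:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dim = 5, 4
estimates = [rng.normal(size=dim) for _ in range(n)]

# Draw noise, then project it onto the nullspace of the averaging operator:
# subtracting the per-coordinate mean makes the perturbations sum to zero.
noise = rng.normal(size=(n, dim))
noise -= noise.mean(axis=0, keepdims=True)

perturbed = [e + z for e, z in zip(estimates, noise)]

# Individual estimates are masked, but the network average is untouched.
assert np.allclose(np.mean(perturbed, axis=0), np.mean(estimates, axis=0))
```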
arXiv Detail & Related papers (2020-10-23T10:35:35Z) - FedSKETCH: Communication-Efficient and Private Federated Learning via
Sketching [33.54413645276686]
Communication complexity and privacy are the two key challenges in Federated Learning.
We introduce the FedSKETCH and FedSKETCHGATE algorithms to jointly address both challenges in federated learning.
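The summary does not spell out FedSKETCH's construction; below is a generic count-sketch compression of a high-dimensional update, the kind of sketching primitive such methods rely on. All names and parameters are illustrative:

```python
import numpy as np

def count_sketch(v, width, depth, seed=0):
    """Compress v into a depth x width sketch using hash and sign functions."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, width, size=(depth, v.size))   # bucket hashes
    sgn = rng.choice([-1.0, 1.0], size=(depth, v.size))  # sign hashes
    S = np.zeros((depth, width))
    for r in range(depth):
        np.add.at(S[r], idx[r], sgn[r] * v)              # accumulate signed values
    return S, idx, sgn

def estimate(S, idx, sgn):
    """Approximate recovery: median over rows of signed bucket values."""
    est = np.stack([sgn[r] * S[r, idx[r]] for r in range(S.shape[0])])
    return np.median(est, axis=0)

v = np.zeros(1000)
v[:5] = [10, -8, 6, 5, 4]                     # a few heavy coordinates, as in sparse gradients
S, idx, sgn = count_sketch(v, width=64, depth=5)
print(np.round(estimate(S, idx, sgn)[:6], 1)) # heavy entries recovered approximately
```

Here a 1000-dimensional vector is communicated as a 5 x 64 sketch, trading a small recovery error for a large reduction in upload size.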
arXiv Detail & Related papers (2020-08-11T19:22:48Z) - Privacy-preserving Traffic Flow Prediction: A Federated Learning
Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU achieves a prediction accuracy of 90.96%, higher than that of advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.