A Federated Deep Learning Framework for Privacy Preservation and
Communication Efficiency
- URL: http://arxiv.org/abs/2001.09782v3
- Date: Wed, 5 Jan 2022 05:05:42 GMT
- Title: A Federated Deep Learning Framework for Privacy Preservation and
Communication Efficiency
- Authors: Tien-Dung Cao, Tram Truong-Huu, Hien Tran, and Khanh Tran
- Abstract summary: We develop FedPC, a Federated Deep Learning Framework for Privacy Preservation and Communication Efficiency.
FedPC allows a model to be learned on multiple private datasets without revealing any information about the training data, even through intermediate data.
Results show that FedPC keeps model performance within $8.5\%$ of that of centrally-trained models when data is distributed across 10 computing nodes.
- Score: 1.2599533416395765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has achieved great success in many applications. However, its
deployment in practice has been hindered by two issues: the privacy of the data that
must be aggregated centrally for model training, and the high communication
overhead caused by transmitting large amounts of usually geographically
distributed data. Addressing both issues is challenging, and most existing works
do not provide an efficient solution. In this paper, we develop FedPC, a
Federated Deep Learning Framework for Privacy Preservation and Communication
Efficiency. The framework allows a model to be learned on multiple private
datasets without revealing any information about the training data, even through
intermediate data. The framework also minimizes the amount of data exchanged to
update the model. We formally prove the convergence of the learning model when
training with FedPC, as well as its privacy-preserving property. We perform extensive
experiments to evaluate the performance of FedPC in terms of its approximation
to the upper-bound performance (obtained with centralized training) and its communication
overhead. The results show that FedPC keeps model performance
within $8.5\%$ of that of the centrally-trained models when data is
distributed across 10 computing nodes. FedPC also reduces the communication
overhead by up to $42.20\%$ compared to existing works.
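The abstract describes the communication pattern (only model updates, never raw or intermediate training data, leave a node) without spelling out FedPC's protocol. As a point of reference, below is a minimal, generic federated-averaging round in Python; all names are hypothetical and this is not FedPC's actual algorithm.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    """One client's local training on a toy linear model (private data stays put)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        w -= lr * data.T @ (preds - labels) / len(labels)  # gradient of MSE loss
    return w

def federated_round(global_w, clients):
    """Each client trains locally; the server averages the returned parameters."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    return np.mean(updates, axis=0)  # only model parameters cross the network

# Toy usage: 10 clients, each holding a private shard of a regression problem.
rng = np.random.default_rng(0)
true_w = rng.normal(size=5)
clients = []
for _ in range(10):
    X = rng.normal(size=(50, 5))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(5)
for _ in range(20):
    w = federated_round(w, clients)
```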
Related papers
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
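To make concrete why the prior approaches mentioned above retain historical client parameters, here is a simplified, hypothetical sketch of history-based unlearning: the server stores each round's per-client deltas and replays aggregation without the client to be forgotten. It ignores the fact that later updates depend on earlier models, and it has nothing to do with the blockchain mechanism this paper proposes.

```python
import numpy as np

def aggregate_history(history, exclude_client=None):
    """history: list of rounds; each round maps client_id -> weight delta."""
    w = np.zeros_like(next(iter(history[0].values())))
    for round_updates in history:
        deltas = [d for cid, d in round_updates.items() if cid != exclude_client]
        if deltas:
            w += np.mean(deltas, axis=0)  # replay the round's averaging
    return w

history = [{0: np.ones(3), 1: 2 * np.ones(3)},
           {0: np.ones(3), 1: np.zeros(3)}]
full_model      = aggregate_history(history)                    # with all clients
unlearned_model = aggregate_history(history, exclude_client=1)  # "forget" client 1
```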
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - FedDBL: Communication and Data Efficient Federated Deep-Broad Learning
for Histopathological Tissue Classification [65.7405397206767]
We propose Federated Deep-Broad Learning (FedDBL) to achieve superior classification performance with limited training samples and only one-round communication.
FedDBL greatly outperforms the competitors with only one round of communication and limited training samples, and it even achieves performance comparable to methods trained with multiple rounds of communication.
Since neither data nor deep models are shared across clients, the privacy issue is well addressed and model security is guaranteed, with no risk of model inversion attacks.
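A minimal sketch of the one-round communication pattern FedDBL emphasizes, assuming clients fit a simple closed-form classifier on locally extracted features and the server averages the results once. The deep-broad learning pipeline itself is not reproduced, and all names are invented.

```python
import numpy as np

def train_local_classifier(features, labels, reg=1e-3):
    """Closed-form ridge classifier on locally extracted features (labels one-hot)."""
    d = features.shape[1]
    return np.linalg.solve(features.T @ features + reg * np.eye(d),
                           features.T @ labels)

def one_round_fit(client_data):
    """Each client uploads one locally fitted classifier; the server averages once."""
    local_models = [train_local_classifier(F, y) for F, y in client_data]
    return np.mean(local_models, axis=0)
```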
arXiv Detail & Related papers (2023-02-24T14:27:41Z) - Federated Pruning: Improving Neural Network Efficiency with Federated
Learning [24.36174705715827]
We propose Federated Pruning to train a reduced model under the federated setting.
We explore different pruning schemes and provide empirical evidence of the effectiveness of our methods.
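As an illustration of one possible pruning scheme in a federated round (the paper evaluates several), here is a hypothetical sketch using global magnitude pruning, so that clients train and transmit only the unpruned weights.

```python
import numpy as np

def magnitude_mask(weights, sparsity=0.5):
    """Keep the largest-magnitude entries; zero out the rest."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return (np.abs(weights) >= threshold).astype(weights.dtype)

def pruned_federated_round(global_w, mask, client_updates):
    """Clients send updates only for unpruned positions; the server averages them."""
    avg_update = np.mean(client_updates, axis=0)
    return (global_w + avg_update) * mask  # pruned positions stay at zero
```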
arXiv Detail & Related papers (2022-09-14T00:48:37Z) - FedDM: Iterative Distribution Matching for Communication-Efficient
Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
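The synthetic-set idea can be illustrated with a generic gradient-matching loop: each client optimizes a small synthetic batch so that the model's gradients on it approximate those on the real data, and only that synthetic batch would inform the global objective. This is a hedged PyTorch sketch assuming a differentiable classifier over flat feature vectors, not FedDM's exact distribution-matching formulation; the function name is invented.

```python
import torch
import torch.nn.functional as F

def match_synthetic_set(model, real_x, real_y, n_syn=10, steps=200, lr=0.1):
    """Learn a small synthetic batch whose gradients mimic those of the real data."""
    syn_x = torch.randn(n_syn, real_x.shape[1], requires_grad=True)
    syn_y = real_y[torch.randint(len(real_y), (n_syn,))]  # reuse real label values
    opt = torch.optim.SGD([syn_x], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]
    g_real = torch.autograd.grad(F.cross_entropy(model(real_x), real_y), params)
    for _ in range(steps):
        g_syn = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y),
                                    params, create_graph=True)
        loss = sum(((a - b) ** 2).sum() for a, b in zip(g_syn, g_real))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return syn_x.detach(), syn_y  # shareable surrogate; raw data never leaves
```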
arXiv Detail & Related papers (2022-07-20T04:55:18Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
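FedReg's specific regularizer (built from generated pseudo-data) is not reproduced here. As a stand-in, the sketch below shows the generic idea of penalizing drift from the global model during local training with a FedProx-style proximal term, which is one common way to alleviate forgetting of global knowledge; all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def local_train_with_proximal(model, global_params, loader, mu=0.01, lr=0.05):
    """Local training with a proximal penalty that discourages drift from the global model."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    anchors = [g.detach().clone() for g in global_params]
    for x, y in loader:
        opt.zero_grad()
        task_loss = F.cross_entropy(model(x), y)
        prox = sum(((p - a) ** 2).sum() for p, a in zip(model.parameters(), anchors))
        (task_loss + 0.5 * mu * prox).backward()
        opt.step()
    return model.state_dict()  # the update the client would send back
```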
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - FedKD: Communication Efficient Federated Learning via Knowledge
Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
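A hedged sketch of the core idea: each client keeps a large local teacher and communicates only a small student trained by distillation, so the per-round payload scales with the student's size. FedKD's full recipe (mutual distillation plus gradient compression) is richer than this, and all names here are invented.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x, y, opt, T=2.0, alpha=0.5):
    """One local step: the student matches both the labels and the teacher's logits."""
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    hard = F.cross_entropy(student_logits, y)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    opt.zero_grad()
    (alpha * hard + (1 - alpha) * soft).backward()
    opt.step()

# After local training, the client uploads only student.state_dict().
```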
arXiv Detail & Related papers (2021-08-30T15:39:54Z) - Communication-Efficient Federated Learning with Dual-Side Low-Rank
Compression [8.353152693578151]
Federated learning (FL) is a promising and powerful approach for training deep learning models without sharing the raw data of clients.
We propose a new training method, referred to as federated learning with dual-side low-rank compression (FedDLR).
We show that FedDLR outperforms state-of-the-art solutions in terms of both communication and computation efficiency.
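A minimal sketch of low-rank compression with truncated SVD, the kind of factorization that can be applied on both the server-to-client and client-to-server sides. FedDLR's exact factorization and rank schedule are not reproduced; the rank and variable names below are assumptions for illustration.

```python
import numpy as np

def compress_low_rank(W, rank):
    """Truncated SVD: send two thin factors instead of the full matrix."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * S[:rank], Vt[:rank]

def decompress(A, B):
    return A @ B  # approximate reconstruction on the receiving side

W = np.random.randn(256, 128)
A, B = compress_low_rank(W, rank=8)
W_hat = decompress(A, B)  # 8*(256+128) values on the wire vs 256*128
```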
arXiv Detail & Related papers (2021-04-26T09:13:31Z) - CatFedAvg: Optimising Communication-efficiency and Classification
Accuracy in Federated Learning [2.2172881631608456]
We introduce a new family of Federated Learning algorithms called CatFedAvg.
It not only improves communication efficiency but also improves the quality of learning using a category coverage maximization strategy.
Our experiments show an increase of 10 absolute percentage points in accuracy on the M dataset, together with 70 absolute percentage points lower network transfer than FedAvg.
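How a category-coverage strategy might look is sketched below as a greedy client-selection step that maximizes the union of label categories seen in a round. This is an assumption for illustration, not CatFedAvg's published procedure; all names are invented.

```python
def select_clients(client_labels, budget):
    """client_labels: dict client_id -> set of label categories held locally."""
    chosen, covered = [], set()
    remaining = dict(client_labels)
    while remaining and len(chosen) < budget:
        # Greedily pick the client adding the most not-yet-covered categories.
        cid = max(remaining, key=lambda c: len(remaining[c] - covered))
        chosen.append(cid)
        covered |= remaining.pop(cid)
    return chosen, covered

clients = {0: {0, 1}, 1: {2, 3, 4}, 2: {1, 2}, 3: {5}}
print(select_clients(clients, budget=2))  # e.g. ([1, 0], {0, 1, 2, 3, 4})
```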
arXiv Detail & Related papers (2020-11-14T06:52:02Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
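A very rough sketch of the weight-factorization idea: a dictionary of weight factors is shared across clients, and each client composes its layer weights from a sparse, client-specific selection over that dictionary. The Indian Buffet Process prior that WAFFLe places over those selections is omitted, and every name here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_factors, d_in, d_out = 16, 32, 10
dictionary = rng.normal(size=(n_factors, d_in, d_out))  # shared across clients

def client_weights(selection):
    """selection: binary vector saying which shared factors this client uses."""
    return np.tensordot(selection, dictionary, axes=1)   # (d_in, d_out) matrix

selection_a = rng.integers(0, 2, size=n_factors)         # client A's sparse usage
W_a = client_weights(selection_a)
```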
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.