H-FL: A Hierarchical Communication-Efficient and Privacy-Protected
Architecture for Federated Learning
- URL: http://arxiv.org/abs/2106.00275v1
- Date: Tue, 1 Jun 2021 07:15:31 GMT
- Title: H-FL: A Hierarchical Communication-Efficient and Privacy-Protected
Architecture for Federated Learning
- Authors: He Yang
- Abstract summary: We propose a novel framework called hierarchical federated learning (H-FL) to tackle this challenge.
Considering the degradation of model performance due to the statistical heterogeneity of the training data, we devise a runtime distribution reconstruction strategy.
In addition, we design a compression-correction mechanism incorporated into H-FL to reduce the communication overhead while not sacrificing the model performance.
- Score: 0.2741266294612776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The longstanding goals of federated learning (FL) require rigorous privacy
guarantees and low communication overhead while maintaining relatively high model
accuracy. However, simultaneously achieving all of these goals is extremely
challenging. In this paper, we propose a novel framework called hierarchical
federated learning (H-FL) to tackle this challenge. Considering the degradation
of model performance due to the statistical heterogeneity of the training
data, we devise a runtime distribution reconstruction strategy, which
reallocates the clients appropriately and utilizes mediators to rearrange the
local training of the clients. In addition, we design a compression-correction
mechanism incorporated into H-FL to reduce the communication overhead while not
sacrificing the model performance. To further provide privacy guarantees, we
introduce differential privacy while performing local training, which injects
a moderate amount of noise into only part of the complete model. Experimental
results show that our H-FL framework achieves state-of-the-art performance on
different datasets for real-world image recognition tasks.
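As a rough illustration of the partial noise injection described above, the sketch below privatizes only a subset of a model's parameters before they leave the client. The abstract does not specify which layers H-FL protects or how the noise is calibrated, so the protected prefix, clipping bound, and noise scale here are all illustrative assumptions.

```python
# Hedged sketch: clip and add Gaussian noise to only part of the model
# (here, an assumed "classifier" head) before uploading the update.
import torch
import torch.nn as nn

def privatize_partial(model: nn.Module, protected_prefix: str = "classifier",
                      clip_norm: float = 1.0, noise_std: float = 0.1) -> dict:
    """Return a state dict in which only parameters whose names start with
    `protected_prefix` are clipped and perturbed with Gaussian noise."""
    state = {k: v.detach().clone() for k, v in model.state_dict().items()}
    for name, tensor in state.items():
        if name.startswith(protected_prefix):
            norm = tensor.norm()
            if norm > clip_norm:                  # bound the sensitivity
                tensor.mul_(clip_norm / norm)
            tensor.add_(torch.randn_like(tensor) * noise_std)  # inject noise
    return state

# Toy example: only the head is privatized before upload.
model = nn.Sequential()
model.add_module("features", nn.Linear(32, 16))
model.add_module("classifier", nn.Linear(16, 10))
noisy_update = privatize_partial(model)
```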
Related papers
- Privacy-Preserving Federated Learning with Consistency via Knowledge Distillation Using Conditional Generator [19.00239208095762]
Federated Learning (FL) is gaining popularity as a distributed learning framework that only shares model parameters or updates and keeps private data locally.
We propose FedMD-CG, a novel FL method with highly competitive performance and high-level privacy preservation.
We conduct extensive experiments on various image classification tasks to validate the superiority of FedMD-CG.
arXiv Detail & Related papers (2024-09-11T02:36:36Z)
- TriplePlay: Enhancing Federated Learning with CLIP for Non-IID Data and Resource Efficiency [0.0]
TriplePlay is a framework that integrates CLIP as an adapter to enhance FL's adaptability and performance across diverse data distributions.
Our simulation results demonstrate that TriplePlay effectively decreases GPU usage costs and speeds up the learning process, achieving convergence with reduced communication overhead.
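The summary does not give TriplePlay's exact architecture, so the following is only the common "frozen backbone plus small trainable adapter" pattern it alludes to. The `backbone` below is a stand-in for CLIP's image encoder, which in practice would be loaded from a CLIP library and kept frozen on each client.

```python
# Hedged sketch of a residual adapter head on top of a frozen feature extractor.
import torch
import torch.nn as nn

class AdapterHead(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 64):
        super().__init__()
        self.adapter = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, feat_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, features):
        # Residual adapter: adjust the frozen features, then classify.
        return self.classifier(features + self.adapter(features))

backbone = nn.Linear(3 * 32 * 32, 512)   # stand-in for a frozen CLIP encoder
for p in backbone.parameters():
    p.requires_grad_(False)
head = AdapterHead(feat_dim=512, num_classes=10)
# Only the small adapter head is trained and communicated in FL rounds.
```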
arXiv Detail & Related papers (2024-09-09T06:04:42Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
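The summary only states that local training adopts adversarial training; the logit-calibration details are not given. The sketch below is a generic one-step (FGSM) adversarial training update a client might run locally, with an assumed perturbation budget, not the paper's exact procedure.

```python
# Hedged sketch: FGSM-based adversarial training for one local batch.
import torch
import torch.nn.functional as F

def local_adversarial_step(model, x, y, optimizer, epsilon=8 / 255):
    # Craft a one-step adversarial example from the clean batch.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)           # gradient w.r.t. the input
    x_adv = (x + epsilon * grad.sign()).detach()   # FGSM perturbation
    # Update the local model on the adversarial batch.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```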
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- Federated Deep Equilibrium Learning: Harnessing Compact Global Representations to Enhance Personalization [23.340237814344377]
Federated Learning (FL) has emerged as a groundbreaking distributed learning paradigm enabling clients to train a global model collaboratively without exchanging data.
We introduce FeDEQ, a novel FL framework that incorporates deep equilibrium learning and consensus optimization to harness compact global data representations for efficient personalization.
We show that FeDEQ matches the performance of state-of-the-art personalized FL methods, while significantly reducing communication size by up to 4 times and memory footprint by 1.5 times during training.
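FeDEQ builds on deep equilibrium (DEQ) models, whose "layer" output is the fixed point z* = f(z*, x) of a small transformation f. The sketch below shows only that fixed-point idea via naive iteration; FeDEQ's consensus optimization and personalization split are not reproduced here, and the layer sizes and tolerances are assumptions.

```python
# Hedged sketch: an equilibrium layer computed by naive fixed-point iteration.
import torch
import torch.nn as nn

class EquilibriumLayer(nn.Module):
    def __init__(self, dim: int, max_iters: int = 30, tol: float = 1e-4):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.max_iters, self.tol = max_iters, tol

    def f(self, z, x):
        return torch.tanh(self.linear(z) + x)

    def forward(self, x):
        z = torch.zeros_like(x)
        for _ in range(self.max_iters):      # iterate until z is (nearly) a fixed point
            z_next = self.f(z, x)
            if (z_next - z).norm() < self.tol:
                break
            z = z_next
        return z

# The compact equilibrium representation would act as the shared global
# component, with a small personalized head kept on each client.
```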
arXiv Detail & Related papers (2023-09-27T13:48:12Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
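Analog over-the-air computation exploits the superposition of simultaneously transmitted waveforms: the server receives approximately the sum of the clients' updates in one shot rather than separate uploads. The NumPy sketch below ignores fading and power control and uses an assumed noise level, so it only conveys the aggregation idea.

```python
# Hedged sketch: noisy over-the-air aggregation of client updates.
import numpy as np

rng = np.random.default_rng(0)
client_updates = [rng.normal(size=1000) for _ in range(10)]   # local model updates

noise_std = 0.01
received = sum(client_updates) + rng.normal(scale=noise_std, size=1000)
global_update = received / len(client_updates)                # noisy average at the server
```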
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Stochastic Coded Federated Learning: Theoretical Analysis and Incentive Mechanism Design [18.675244280002428]
We propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques.
In SCFL, each edge device uploads a privacy-preserving coded dataset to the server, which is generated by adding noise to the projected local dataset.
We show that SCFL learns a better model within the given time and achieves a better privacy-performance tradeoff than the baseline methods.
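The summary says each device uploads a coded dataset obtained by projecting its local data and adding noise. The minimal sketch below uses a random Gaussian projection; the projection dimension and noise scale are assumptions, and SCFL's coding and incentive details are not reproduced.

```python
# Hedged sketch: build a privacy-preserving coded dataset from local data.
import numpy as np

rng = np.random.default_rng(42)
local_data = rng.normal(size=(500, 64))            # 500 samples, 64 features

proj_dim = 16
projection = rng.normal(size=(64, proj_dim)) / np.sqrt(proj_dim)
coded = local_data @ projection                    # project / compress the data
coded += rng.normal(scale=0.5, size=coded.shape)   # add privacy-preserving noise
# `coded` (500 x 16) is what the device would upload instead of raw data.
```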
arXiv Detail & Related papers (2022-11-08T09:58:36Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
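One common way to make a small synthetic set mimic the loss landscape of real data is gradient matching: optimize the synthetic batch so the model's gradients on it resemble those on the real data. The step below is an illustrative sketch in that spirit, not FedDM's exact objective.

```python
# Hedged sketch: one gradient-matching update of a synthetic batch.
import torch
import torch.nn.functional as F

def gradient_match_step(model, real_x, real_y, syn_x, syn_y, syn_opt):
    """`syn_x` is a leaf tensor with requires_grad=True, optimized by `syn_opt`."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_real = torch.autograd.grad(F.cross_entropy(model(real_x), real_y), params)
    g_syn = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y), params,
                                create_graph=True)
    # Penalize the distance between real and synthetic gradients.
    match_loss = sum(((gr.detach() - gs) ** 2).sum()
                     for gr, gs in zip(g_real, g_syn))
    syn_opt.zero_grad()
    match_loss.backward()       # gradient flows into the synthetic examples
    syn_opt.step()
    return match_loss.item()
```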
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
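In data-free knowledge distillation of this kind, a generator synthesizes pseudo-samples on the server and the aggregated global model is pushed toward the client models' predictions on them. The sketch below shows only that distillation step; the generator's own training and FedFTG's exact losses are omitted, and the noise dimension and batch size are assumptions.

```python
# Hedged sketch: server-side distillation of an ensemble of client models
# into the global model using generator-produced pseudo data.
import torch
import torch.nn.functional as F

def server_distill_step(global_model, client_models, generator, opt,
                        noise_dim=100, batch_size=64):
    z = torch.randn(batch_size, noise_dim)
    pseudo = generator(z).detach()              # synthetic inputs, no real data needed
    with torch.no_grad():                       # average the client (teacher) predictions
        teacher = torch.stack([m(pseudo) for m in client_models]).mean(dim=0)
    student = global_model(pseudo)
    kd_loss = F.kl_div(F.log_softmax(student, dim=1),
                       F.softmax(teacher, dim=1), reduction="batchmean")
    opt.zero_grad()
    kd_loss.backward()
    opt.step()
    return kd_loss.item()
```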
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
However, FL suffers performance degradation when the client data distribution is non-IID.
We propose a new adaptive training algorithm AdaFL to combat this degradation.
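The summary does not spell out AdaFL's attention mechanism, so the snippet below is only a generic illustration of attention-style aggregation, in which client updates that agree more with a reference direction receive larger weights. Every detail here (cosine similarity, softmax temperature) is an assumption.

```python
# Hedged sketch: attention-weighted aggregation of client updates.
import torch
import torch.nn.functional as F

def attention_aggregate(client_deltas, reference_delta, temperature=1.0):
    """client_deltas: list of flattened update tensors; reference_delta: e.g. the
    previous global update, used only to score each client's contribution."""
    scores = torch.stack([F.cosine_similarity(d, reference_delta, dim=0)
                          for d in client_deltas])
    weights = F.softmax(scores / temperature, dim=0)   # attention weights
    aggregated = sum(w * d for w, d in zip(weights, client_deltas))
    return aggregated, weights
```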
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- FedMix: Approximation of Mixup under Mean Augmented Federated Learning [60.503258658382]
Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device.
Current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases.
We propose a new augmentation algorithm, named FedMix, which is inspired by a phenomenal yet simple data augmentation method, Mixup.
arXiv Detail & Related papers (2021-07-01T06:14:51Z)
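Mixup, which FedMix builds on, trains on convex combinations of pairs of examples and their labels. The sketch below shows vanilla Mixup on a single local batch; FedMix itself approximates mixing with averaged batches exchanged across clients, which is not reproduced here, and the `alpha` value is an assumption.

```python
# Hedged sketch: vanilla Mixup on one batch (inputs and one-hot labels).
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                       # mixing coefficient
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]              # mix inputs
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]  # mix soft labels
    return x_mix, y_mix
```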