HiFlash: Communication-Efficient Hierarchical Federated Learning with
Adaptive Staleness Control and Heterogeneity-aware Client-Edge Association
- URL: http://arxiv.org/abs/2301.06447v1
- Date: Mon, 16 Jan 2023 14:39:04 GMT
- Title: HiFlash: Communication-Efficient Hierarchical Federated Learning with
Adaptive Staleness Control and Heterogeneity-aware Client-Edge Association
- Authors: Qiong Wu and Xu Chen and Tao Ouyang and Zhi Zhou and Xiaoxi Zhang and
Shusen Yang and Junshan Zhang
- Abstract summary: Federated learning (FL) is a promising paradigm that enables collaborative learning of a shared model across massive numbers of clients.
In many existing FL systems, clients must frequently exchange large model parameters with the remote cloud server directly over wide-area networks (WAN).
We adopt the hierarchical federated learning paradigm of HiFL, which reaps the benefits of mobile edge computing.
- Score: 38.99309610943313
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a promising paradigm that enables
collaborative learning of a shared model across massive numbers of clients
while keeping the training data local. However, in many existing FL systems,
clients must frequently exchange large model parameters with the remote cloud
server directly over wide-area networks (WAN), leading to significant
communication overhead and long transmission times. To mitigate this
communication bottleneck, we adopt the hierarchical federated learning
paradigm of HiFL, which reaps the benefits of mobile edge computing and
combines synchronous client-edge model aggregation with asynchronous
edge-cloud model aggregation to greatly reduce the traffic volume of WAN
transmissions. Specifically, we first analyze the convergence bound of HiFL
theoretically and identify the key controllable factors for improving model
performance. We then propose HiFlash, an enhanced design that integrates
deep-reinforcement-learning-based adaptive staleness control with a
heterogeneity-aware client-edge association strategy to boost system
efficiency and mitigate the staleness effect without compromising model
accuracy. Extensive experiments corroborate the superior performance of
HiFlash in model accuracy, communication reduction, and system efficiency.
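
A minimal sketch of the two-tier aggregation pattern described above:
synchronous FedAvg at the edge and an asynchronous, staleness-aware update at
the cloud. The function names and the fixed 1/(1 + staleness) discount are
illustrative assumptions; HiFlash learns its staleness control policy with
deep reinforcement learning rather than using a fixed rule.

```python
import numpy as np

def edge_aggregate(client_models, client_sizes):
    """Synchronous client-edge aggregation: FedAvg weighted by local data size."""
    total = sum(client_sizes)
    return sum(m * (n / total) for m, n in zip(client_models, client_sizes))

def cloud_update(global_model, edge_model, staleness, eta=1.0):
    """Asynchronous edge-cloud aggregation with a staleness-discounted mixing
    weight. The 1/(1 + staleness) discount is an illustrative fixed rule;
    HiFlash instead learns an adaptive staleness control policy."""
    alpha = eta / (1.0 + staleness)
    return (1.0 - alpha) * global_model + alpha * edge_model

# Toy run: one edge with two clients, updating a 4-parameter "model".
rng = np.random.default_rng(0)
global_model = np.zeros(4)
for rnd, staleness in enumerate([0, 2, 1]):
    client_models = [global_model + rng.normal(size=4) for _ in range(2)]
    edge_model = edge_aggregate(client_models, client_sizes=[100, 300])
    global_model = cloud_update(global_model, edge_model, staleness)
    print(f"round {rnd}: staleness={staleness}, model={global_model.round(3)}")
```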
Related papers
- FedPAE: Peer-Adaptive Ensemble Learning for Asynchronous and Model-Heterogeneous Federated Learning [9.084674176224109]
Federated learning (FL) enables multiple clients with distributed data sources to collaboratively train a shared model without compromising data privacy.
We introduce Federated Peer-Adaptive Ensemble Learning (FedPAE), a fully decentralized personalized FL (pFL) algorithm that supports model heterogeneity and asynchronous learning.
Our approach utilizes a peer-to-peer model sharing mechanism and ensemble selection to achieve a more refined balance between local and global information.
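
A hedged sketch of the kind of peer-to-peer ensemble selection this summary
mentions, assuming a simple greedy rule that keeps peers whose predictions
improve validation accuracy; FedPAE's actual selection criterion is not given
here.

```python
import numpy as np

def select_peer_ensemble(local_logits, peer_logits, labels, k=2):
    """Greedily add up to k peer models whose predictions, averaged into the
    current ensemble, improve validation accuracy. An illustrative stand-in
    for FedPAE's ensemble selection rule."""
    def acc(members):
        mean_logits = np.mean(members, axis=0)
        return float((mean_logits.argmax(axis=1) == labels).mean())
    members, chosen = [local_logits], []
    for _ in range(k):
        gains = [(acc(members + [p]) - acc(members), i)
                 for i, p in enumerate(peer_logits) if i not in chosen]
        best_gain, best_i = max(gains, default=(0.0, None))
        if best_i is None or best_gain <= 0:
            break
        chosen.append(best_i)
        members.append(peer_logits[best_i])
    return chosen

# Toy check: 3 peers, 5 validation samples, 3 classes.
rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=5)
local = rng.normal(size=(5, 3))
peers = [rng.normal(size=(5, 3)) for _ in range(3)]
print("selected peers:", select_peer_ensemble(local, peers, labels))
```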
arXiv Detail & Related papers (2024-10-17T22:47:19Z)
- Efficient Model Compression for Hierarchical Federated Learning [10.37403547348343]
Federated learning (FL) has garnered significant attention due to its capacity to preserve privacy within distributed learning systems.
This paper introduces a novel hierarchical FL framework that integrates the benefits of clustered FL and model compression.
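
As a rough illustration of the clustered-FL side of this framework, the sketch
below groups clients whose model updates point in similar directions; the
cosine criterion and threshold are assumptions, not the paper's method. Each
resulting cluster would then train and compress a shared model.

```python
import numpy as np

def cluster_clients(updates, threshold=0.5):
    """Group clients whose flattened model updates have cosine similarity
    above a threshold with a cluster's first member. Illustrative only."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    clusters = []
    for i, u in enumerate(updates):
        for c in clusters:
            if cos(u, updates[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

rng = np.random.default_rng(2)
base_a, base_b = rng.normal(size=8), rng.normal(size=8)
updates = [base_a + 0.1 * rng.normal(size=8) for _ in range(3)] + \
          [base_b + 0.1 * rng.normal(size=8) for _ in range(2)]
print(cluster_clients(updates))  # e.g. [[0, 1, 2], [3, 4]] if bases differ
```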
arXiv Detail & Related papers (2024-05-27T12:17:47Z)
- Adaptive Hybrid Model Pruning in Federated Learning through Loss Exploration [17.589308358508863]
We introduce AutoFLIP, an innovative approach that utilizes a federated loss exploration phase to drive adaptive hybrid pruning.
We show that AutoFLIP not only efficiently accelerates global convergence, but also achieves superior accuracy and robustness compared to traditional methods.
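
A small sketch of loss-exploration-driven pruning under stated assumptions:
parameter saliency is proxied by |weight| times the mean |gradient| across
clients' local losses, and the lowest-saliency fraction is masked out.
AutoFLIP's exact saliency score is not specified in this summary.

```python
import numpy as np

def adaptive_prune_mask(weights, client_grads, prune_frac=0.5):
    """Mask out the lowest-saliency parameters, with saliency proxied by
    |weight| * mean |gradient| across clients' local losses. Illustrative
    stand-in for AutoFLIP's loss-exploration score."""
    mean_abs_grad = np.mean(np.abs(np.stack(client_grads)), axis=0)
    saliency = np.abs(weights) * mean_abs_grad
    k = int(prune_frac * weights.size)
    cutoff = np.partition(saliency, k)[k]      # k-th smallest saliency value
    return (saliency > cutoff).astype(weights.dtype)

rng = np.random.default_rng(3)
w = rng.normal(size=10)
grads = [rng.normal(size=10) for _ in range(4)]
mask = adaptive_prune_mask(w, grads, prune_frac=0.5)
print("kept", int(mask.sum()), "of", w.size, "weights")
print("pruned model:", np.round(w * mask, 3))
```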
arXiv Detail & Related papers (2024-05-16T17:27:41Z)
- Communication-Efficient Federated Learning through Adaptive Weight Clustering and Server-Side Distillation [10.541541376305245]
Federated Learning (FL) is a promising technique for the collaborative training of deep neural networks across multiple devices.
However, FL is hindered by excessive communication costs due to repeated server-client communication during training.
We propose FedCompress, a novel approach that combines dynamic weight clustering and server-side knowledge distillation.
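
A minimal sketch of the weight-clustering half of FedCompress, assuming plain
1-D k-means over a flattened weight vector; the dynamic cluster adaptation and
server-side knowledge distillation are not shown.

```python
import numpy as np

def cluster_weights(weights, num_clusters=8, iters=10):
    """Quantize a weight vector with 1-D k-means: each weight is replaced by
    its cluster centroid, so only the centroids plus per-weight cluster
    indices need to be communicated."""
    centroids = np.linspace(weights.min(), weights.max(), num_clusters)
    for _ in range(iters):
        assign = np.abs(weights[:, None] - centroids[None, :]).argmin(axis=1)
        for c in range(num_clusters):
            members = weights[assign == c]
            if members.size:
                centroids[c] = members.mean()
    assign = np.abs(weights[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids[assign], centroids

rng = np.random.default_rng(4)
w = rng.normal(size=1000).astype(np.float32)
w_q, centroids = cluster_weights(w, num_clusters=8)
# 1000 float32 values (4000 bytes) shrink to 8 centroids + 1000 3-bit indices.
print("max quantization error:", float(np.abs(w - w_q).max()))
```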
arXiv Detail & Related papers (2024-01-25T14:49:15Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
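
A short sketch of that split, assuming named parameter groups: the global part
is averaged across devices while each personalized head stays local. The layer
names and split point below are illustrative.

```python
import numpy as np

# Global part: shared and prunable, learning data representations.
# Personalized part: fine-tuned for a specific device, never aggregated.
GLOBAL_LAYERS = ["conv1", "conv2"]
PERSONAL_LAYERS = ["head"]

def split_params(params):
    g = {k: v for k, v in params.items() if k in GLOBAL_LAYERS}
    p = {k: v for k, v in params.items() if k in PERSONAL_LAYERS}
    return g, p

def aggregate_global(global_parts):
    """FedAvg over the shared part only; personalized heads stay on-device."""
    return {k: np.mean([gp[k] for gp in global_parts], axis=0)
            for k in GLOBAL_LAYERS}

rng = np.random.default_rng(5)
devices = [{"conv1": rng.normal(size=4), "conv2": rng.normal(size=4),
            "head": rng.normal(size=2)} for _ in range(3)]
shared = aggregate_global([split_params(d)[0] for d in devices])
for d in devices:   # each device adopts the shared part, keeps its own head
    d.update(shared)
print("shared conv1:", shared["conv1"].round(3))
```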
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
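
To make the fronthaul quantization piece concrete, here is a hedged sketch of
plain uniform quantization of a local update; the paper optimizes the
quantizer jointly with the transceivers, and the fixed bit widths below are
assumptions.

```python
import numpy as np

def uniform_quantize(x, bits=4):
    """Uniformly quantize a vector to 2**bits levels over its dynamic range,
    a simple stand-in for the fronthaul quantization design."""
    lo, hi = float(x.min()), float(x.max())
    levels = 2 ** bits - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    return np.round((x - lo) / step) * step + lo

rng = np.random.default_rng(6)
update = rng.normal(size=1000)
for bits in (2, 4, 8):
    err = np.abs(update - uniform_quantize(update, bits)).max()
    print(f"{bits}-bit fronthaul quantization, max error {err:.4f}")
```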
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
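
A toy sketch of analog over-the-air aggregation: clients precode by inverting
their channel gains and transmit simultaneously, so the receiver observes the
superposed sum plus noise in a single channel use. Ideal channel inversion and
real-valued channels are simplifying assumptions.

```python
import numpy as np

def ota_aggregate(updates, channel_gains, noise_std=0.01):
    """Analog over-the-air aggregation: precode each update by inverting its
    channel gain; simultaneous transmission lets the channel compute the sum,
    and the receiver divides by the client count to recover a noisy average."""
    tx = [u / h for u, h in zip(updates, channel_gains)]   # channel inversion
    rx = sum(h * t for h, t in zip(channel_gains, tx))     # superposition
    rx = rx + np.random.default_rng(7).normal(0, noise_std, size=rx.shape)
    return rx / len(updates)

updates = [np.ones(4) * v for v in (1.0, 2.0, 3.0)]
gains = [0.8, 1.1, 0.6]
print("over-the-air avg:", ota_aggregate(updates, gains).round(3))
print("exact average   :", np.mean(updates, axis=0).round(3))
```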
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a collaborative learning paradigm, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Reconfigurable Intelligent Surface Enabled Federated Learning: A Unified Communication-Learning Design Approach [30.1988598440727]
We develop a unified communication-learning optimization problem to jointly optimize device selection, over-the-air transceiver design, and RIS configuration.
Numerical experiments show that the proposed design achieves substantial learning accuracy improvement compared with the state-of-the-art approaches.
arXiv Detail & Related papers (2020-11-20T08:54:13Z)