Communication Resources Constrained Hierarchical Federated Learning for
End-to-End Autonomous Driving
- URL: http://arxiv.org/abs/2306.16169v1
- Date: Wed, 28 Jun 2023 12:44:59 GMT
- Authors: Wei-Bin Kou, Shuai Wang, Guangxu Zhu, Bin Luo, Yingxian Chen, Derrick
Wing Kwan Ng, and Yik-Chung Wu
- Abstract summary: This paper proposes an optimization-based Communication Resource Constrained Hierarchical Federated Learning (CRCHFL) framework.
Results show that the proposed CRCHFL both accelerates convergence and enhances the generalization of the federated autonomous driving model.
- Score: 67.78611905156808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While federated learning (FL) improves the generalization of end-to-end
autonomous driving through model aggregation, conventional single-hop FL (SFL)
suffers from a slow convergence rate due to long-range communication between
vehicles and the cloud server. Hierarchical federated learning (HFL) overcomes
this drawback by introducing mid-point edge servers. However, orchestrating
constrained communication resources against HFL performance becomes an urgent
problem. This paper proposes an optimization-based Communication Resource
Constrained Hierarchical Federated Learning (CRCHFL) framework that minimizes
the generalization error of the autonomous driving model using hybrid data and
model aggregation. The effectiveness of the proposed CRCHFL is evaluated on the
Car Learning to Act (CARLA) simulation platform. Results show that CRCHFL both
accelerates convergence and enhances the generalization of the federated
autonomous driving model. Moreover, under the same communication resource
budget, it outperforms HFL by 10.33% and SFL by 12.44%.
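As a point of reference for the architecture the abstract describes, here is a minimal sketch of the two-tier (vehicle -> edge -> cloud) model aggregation that any hierarchical FL scheme such as CRCHFL builds on. The function names, the weighting by sample counts, and the flat-vector model representation are illustrative assumptions, not the authors' implementation.

```python
# Minimal two-tier hierarchical FedAvg sketch (illustrative only; not the
# authors' CRCHFL implementation). Models are flat numpy parameter vectors.
import numpy as np

def fedavg(models, weights):
    """Weighted average of parameter vectors, e.g. by local sample count."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, models))

def hierarchical_round(vehicle_models, edge_groups, samples):
    """One communication round: vehicles -> edge servers -> cloud.

    vehicle_models: list of parameter vectors, one per vehicle
    edge_groups:    list of index lists, vehicles attached to each edge server
    samples:        number of local training samples per vehicle
    """
    edge_models, edge_weights = [], []
    for group in edge_groups:                     # short-range edge aggregation
        edge_models.append(fedavg([vehicle_models[i] for i in group],
                                  [samples[i] for i in group]))
        edge_weights.append(sum(samples[i] for i in group))
    # single long-range hop: edge servers -> cloud
    return fedavg(edge_models, edge_weights)

# toy usage: 4 vehicles attached to 2 edge servers
rng = np.random.default_rng(0)
vehicles = [rng.normal(size=8) for _ in range(4)]
global_model = hierarchical_round(vehicles, [[0, 1], [2, 3]], [100, 50, 80, 70])
print(global_model.shape)  # (8,)
```

The point of the middle tier is that the frequent, cheap aggregations happen over short-range vehicle-to-edge links, and only the already-aggregated edge models cross the long-range link to the cloud.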
Related papers
- Model Partition and Resource Allocation for Split Learning in Vehicular Edge Networks [24.85135243655983]
This paper proposes a novel U-shaped split federated learning (U-SFL) framework to address privacy and communication-efficiency challenges in vehicular edge networks.
U-SFL is able to enhance privacy protection by keeping both raw data and labels on the vehicular user (VU) side.
To optimize communication efficiency, we introduce a semantic-aware auto-encoder (SAE) that significantly reduces the dimensionality of transmitted data.
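A semantic-aware auto-encoder of the kind described can be pictured as compressing the cut-layer activations before transmission. The sketch below is a hedged PyTorch illustration; the layer sizes and the class name `SAE` are assumptions, not the paper's architecture.

```python
# Hedged sketch of an auto-encoder used to compress split-learning
# activations before transmission (layer sizes are assumptions).
import torch
import torch.nn as nn

class SAE(nn.Module):
    def __init__(self, feat_dim=512, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))

    def forward(self, x):
        code = self.encoder(x)             # low-dimensional code sent uplink
        return self.decoder(code), code

sae = SAE()
acts = torch.randn(16, 512)                # cut-layer activations on the vehicle
recon, code = sae(acts)
print(code.shape)                          # torch.Size([16, 32]): 16x fewer dims
loss = nn.functional.mse_loss(recon, acts) # reconstruction objective
```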
arXiv Detail & Related papers (2024-11-11T07:59:13Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for HFL in wireless networks to reduce the neural network scale.
We show that the proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent.
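One generic way to realize such a saving is magnitude pruning of the parameters before upload; the sketch below illustrates the idea only and is not the paper's pruning criterion.

```python
# Generic magnitude-pruning sketch: zero out the smallest-magnitude weights
# before uploading, so only ~half the parameters travel over the network.
import numpy as np

def prune_by_magnitude(params, keep_ratio=0.5):
    flat = np.abs(params).ravel()
    k = int(len(flat) * keep_ratio)
    threshold = np.partition(flat, -k)[-k]          # k-th largest magnitude
    mask = np.abs(params) >= threshold
    return params * mask, mask                      # sparse update + its mask

rng = np.random.default_rng(1)
w = rng.normal(size=(1000,))
pruned, mask = prune_by_magnitude(w, keep_ratio=0.5)
print(mask.mean())   # ~0.5: roughly half the entries (hence bytes) remain
```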
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent work has interpreted FL within a Model-Agnostic Meta-Learning (MAML) framework, which gives FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
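Under the MAML reading, each client first adapts the global model with an inner gradient step, and the server then optimizes the post-adaptation loss. Below is a minimal first-order sketch on toy quadratic client objectives; the step sizes, the one-step inner loop, and the first-order approximation are assumptions for illustration.

```python
# First-order sketch of MAML-style (personalized) FL: each client adapts
# the global model with one inner SGD step, then reports the gradient of
# the post-adaptation loss.
import numpy as np

def quadratic_loss_grad(w, data):
    A, b = data                        # per-client quadratic: 0.5*w'Aw - b'w
    return A @ w - b

def maml_round(w_global, clients, alpha=0.1, beta=0.05):
    outer_grads = []
    for data in clients:
        # inner adaptation step taken locally by the client
        w_adapted = w_global - alpha * quadratic_loss_grad(w_global, data)
        # first-order approximation of the outer (meta) gradient
        outer_grads.append(quadratic_loss_grad(w_adapted, data))
    return w_global - beta * np.mean(outer_grads, axis=0)

rng = np.random.default_rng(2)
clients = [(np.eye(4) * rng.uniform(0.5, 2.0), rng.normal(size=4))
           for _ in range(5)]
w = np.zeros(4)
for _ in range(100):                   # server rounds
    w = maml_round(w, clients)
```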
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Delay-Aware Hierarchical Federated Learning [7.292078085289465]
The paper introduces delay-aware hierarchical federated learning (DFL) to improve the efficiency of distributed machine learning (ML) model training.
During global synchronization, the cloud server consolidates local models with an outdated global model using a convex control algorithm.
Numerical evaluations show DFL's superior performance in terms of faster global model convergence, reduced resource consumption, and robustness against communication delays.
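The consolidation step can be read as a convex combination of the fresh aggregate with the outdated global model. Here is a toy sketch with an assumed fixed mixing weight `gamma`; the paper's control algorithm chooses this trade-off more carefully.

```python
# Toy sketch of delay-aware global synchronization: the fresh aggregate of
# (possibly stale) local models is blended with the outdated global model
# via a convex combination. The fixed-gamma mixing rule is an assumption.
import numpy as np

def delay_aware_sync(stale_global, local_models, sample_counts, gamma=0.7):
    """gamma in [0,1] trades trust in fresh local updates vs. the old global."""
    weights = np.asarray(sample_counts, dtype=float)
    fresh = sum((w / weights.sum()) * m
                for w, m in zip(weights, local_models))
    return gamma * fresh + (1.0 - gamma) * stale_global

g_old = np.zeros(3)
locals_ = [np.ones(3), 3 * np.ones(3)]
print(delay_aware_sync(g_old, locals_, sample_counts=[1, 1]))  # [1.4 1.4 1.4]
```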
arXiv Detail & Related papers (2023-03-22T09:23:29Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
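The run-time being minimized is, to first order, dominated by the slowest client in each synchronous round. A back-of-envelope model with made-up numbers (not the paper's system model) makes the straggler effect concrete:

```python
# Back-of-envelope run-time model behind time-sensitive FL: a synchronous
# round finishes when the slowest client finishes, so total run-time is
# rounds * max_k (compute_k + upload_k). All numbers below are made up.
def round_time(flops_per_client, flops_per_sec, uplink_bps, model_bits):
    times = [f / c + model_bits / u
             for f, c, u in zip(flops_per_client, flops_per_sec, uplink_bps)]
    return max(times)                  # the straggler dominates the round

t = round_time(flops_per_client=[1e9, 1e9, 1e9],
               flops_per_sec=[1e9, 5e8, 2e9],   # heterogeneous compute
               uplink_bps=[1e7, 2e7, 5e6],      # heterogeneous links
               model_bits=8e6)
print(f"{t:.2f} s per round")          # 2.40 s: client 2 (slow CPU) straggles
```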
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- HiFlash: Communication-Efficient Hierarchical Federated Learning with Adaptive Staleness Control and Heterogeneity-aware Client-Edge Association [38.99309610943313]
Federated learning (FL) is a promising paradigm that enables collaboratively learning a shared model across massive clients.
For many existing FL systems, clients need to frequently exchange large volumes of model parameters with the remote cloud server directly via wide-area networks (WANs).
We resort to the hierarchical federated learning paradigm of HiFL, which reaps the benefits of mobile edge computing.
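Adaptive staleness control in asynchronous hierarchical FL typically down-weights updates that arrive several rounds late. The polynomial decay below is a common choice in the async-FL literature, assumed here for illustration; it is not necessarily HiFlash's rule.

```python
# Generic staleness-aware aggregation sketch: asynchronous updates that are
# tau rounds old get a decayed weight. The 1/(1+tau)^a decay is a common
# convention in async FL, not necessarily HiFlash's exact rule.
import numpy as np

def staleness_weight(tau, a=0.5):
    return (1.0 + tau) ** (-a)

def async_merge(global_model, client_update, tau):
    eta = staleness_weight(tau)
    return (1 - eta) * global_model + eta * client_update

g = np.zeros(4)
g = async_merge(g, np.ones(4), tau=0)       # fresh update: full weight 1.0
g = async_merge(g, 2 * np.ones(4), tau=8)   # stale update: weight 1/3
print(g)                                    # [1.333... x4]
```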
arXiv Detail & Related papers (2023-01-16T14:39:04Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
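Model splitting without label sharing is commonly realized with a U-shaped placement: the first and last model segments stay on the client, so neither raw data nor labels leave it. Below is a hedged, colocated PyTorch sketch; the split points are assumed, and in a real deployment the segments run on different machines with activations and gradients sent over the network.

```python
# Sketch of label-free split training: the client holds the first and last
# model segments (so raw data AND labels stay local), the server holds the
# middle. Gradients cross the two cut layers in both directions.
import torch
import torch.nn as nn

client_head = nn.Linear(20, 16)                # on vehicle: sees raw features
server_body = nn.Sequential(nn.Linear(16, 16), nn.ReLU())   # on server
client_tail = nn.Linear(16, 2)                 # on vehicle: sees labels

params = (list(client_head.parameters()) + list(server_body.parameters())
          + list(client_tail.parameters()))
opt = torch.optim.SGD(params, lr=0.1)

x, y = torch.randn(8, 20), torch.randint(0, 2, (8,))
smashed = client_head(x)                       # uplink: cut-layer activations
served = server_body(smashed)                  # server-side forward pass
logits = client_tail(served)                   # downlink + local output layer
loss = nn.functional.cross_entropy(logits, y)  # labels never leave the vehicle
opt.zero_grad()
loss.backward()                                # grads flow back across both cuts
opt.step()
```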
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Decentralized Federated Learning: Balancing Communication and Computing Costs [21.694468026280806]
Decentralized federated learning (DFL) is a powerful framework of distributed machine learning.
We propose a general decentralized federated learning framework to strike a balance between communication-efficiency and convergence performance.
Experimental results on the MNIST and CIFAR-10 datasets illustrate the superiority of DFL over traditional decentralized SGD methods.
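A minimal sketch of the decentralized setup being tuned: each node takes local gradient steps and periodically gossip-averages with its ring neighbors, with the gossip period `K` acting as the (assumed) communication-vs-computation knob.

```python
# Minimal decentralized SGD sketch: each node takes a local gradient step,
# then gossip-averages with its ring neighbors via a mixing matrix W.
# Gossiping only every K steps trades communication for consensus quality.
import numpy as np

def ring_mixing_matrix(n):
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    return W                                   # doubly stochastic

n, d, lr, K = 6, 4, 0.1, 5
rng = np.random.default_rng(3)
targets = rng.normal(size=(n, d))              # each node fits its own target
X = np.zeros((n, d))
W = ring_mixing_matrix(n)
for step in range(100):
    X -= lr * (X - targets)                    # local gradient steps
    if step % K == 0:                          # communicate every K steps only
        X = W @ X
print(np.abs(X - X.mean(axis=0)).max())        # residual disagreement
```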
arXiv Detail & Related papers (2021-07-26T09:09:45Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
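Bisection applies here because feasibility is monotone in the delay budget: if a budget T is achievable, any larger budget is too. Below is a generic sketch with a placeholder feasibility check, not the paper's resource-allocation subproblem.

```python
# Generic bisection sketch for a delay-minimization problem: assuming
# feasibility is monotone in the delay budget T, binary-search the smallest
# feasible T. `is_feasible` is a stand-in, not the paper's check.
def min_delay(is_feasible, lo=0.0, hi=100.0, tol=1e-6):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):
            hi = mid                   # budget mid suffices; try smaller
        else:
            lo = mid                   # infeasible; need a larger budget
    return hi

# toy stand-in: feasible iff the budget covers a fixed compute+comm time
print(min_delay(lambda T: T >= 12.345))   # -> ~12.345
```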
arXiv Detail & Related papers (2020-07-05T19:00:07Z)