Real-time End-to-End Federated Learning: An Automotive Case Study
- URL: http://arxiv.org/abs/2103.11879v1
- Date: Mon, 22 Mar 2021 14:16:16 GMT
- Title: Real-time End-to-End Federated Learning: An Automotive Case Study
- Authors: Hongyi Zhang, Jan Bosch, Helena Holmström Olsson
- Abstract summary: We introduce an approach to real-time end-to-end Federated Learning combined with a novel asynchronous model aggregation protocol.
Our results show that asynchronous Federated Learning can significantly improve the prediction performance of local edge models and reach the same accuracy level as the centralized machine learning method.
- Score: 16.79939549201032
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of and growing interest in ML/DL, companies are eager to utilize these methods to improve their service quality and user experience. Federated Learning has been introduced as an efficient approach that distributes and speeds up time-consuming model training while preserving user data privacy. However, common Federated Learning methods apply a synchronized protocol to perform model aggregation, which turns out to be inflexible and unable to adapt to rapidly evolving environments and heterogeneous hardware settings in real-world systems. In this paper, we introduce an approach to real-time end-to-end Federated Learning combined with a novel asynchronous model aggregation protocol. We validate our approach in an industrial use case in the automotive domain, focusing on steering wheel angle prediction for autonomous driving. Our results show that asynchronous Federated Learning can significantly improve the prediction performance of local edge models and reach the same accuracy level as the centralized machine learning method. Moreover, the approach can reduce communication overhead, accelerate model training, and consume real-time streaming data by utilizing a sliding training window, which makes it highly efficient when deploying ML/DL components to heterogeneous real-world embedded systems.
Related papers
- Modality Alignment Meets Federated Broadcasting [9.752555511824593]
Federated learning (FL) has emerged as a powerful approach to safeguard data privacy by training models across distributed edge devices without centralizing local data.
This paper introduces a novel FL framework leveraging modality alignment, where a text encoder resides on the server, and image encoders operate on local devices.
arXiv Detail & Related papers (2024-11-24T13:30:03Z)
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [65.15700861265432]
We present a parameter-efficient continual learning framework to alleviate long-term forgetting in incremental learning with vision-language models.
Our approach involves the dynamic expansion of a pre-trained CLIP model, through the integration of Mixture-of-Experts (MoE) adapters.
To preserve the zero-shot recognition capability of vision-language models, we introduce a Distribution Discriminative Auto-Selector.
arXiv Detail & Related papers (2024-03-18T08:00:23Z)
- Federated Learning based on Pruning and Recovery [0.0]
This framework integrates asynchronous learning algorithms and pruning techniques.
It addresses the inefficiencies of traditional federated learning algorithms in scenarios involving heterogeneous devices.
It also tackles the staleness issue and inadequate training of certain clients in asynchronous algorithms.
arXiv Detail & Related papers (2024-03-16T14:35:03Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- VREM-FL: Mobility-Aware Computation-Scheduling Co-Design for Vehicular Federated Learning [2.6322811557798746]
Vehicular radio environment map federated learning (VREM-FL) is proposed.
It combines mobility of vehicles with 5G radio environment maps.
VREM-FL can be tuned to trade training time for radio resource usage.
arXiv Detail & Related papers (2023-11-30T17:38:54Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distribution.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- An Efficient and Reliable Asynchronous Federated Learning Scheme for Smart Public Transportation [24.8522516507395]
Federated learning (FL) is a distributed machine learning scheme that allows vehicles to receive continuous model updates without having to upload raw data to the cloud.
This paper offers a blockchain-based asynchronous federated learning scheme with a dynamic scaling factor (DBAFL).
Experiments conducted on heterogeneous devices validate the superior learning performance, efficiency, and reliability of DBAFL.
arXiv Detail & Related papers (2022-08-15T13:56:29Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)