Asynchronous Federated Learning for Edge-assisted Vehicular Networks
- URL: http://arxiv.org/abs/2208.01901v1
- Date: Wed, 3 Aug 2022 08:05:02 GMT
- Title: Asynchronous Federated Learning for Edge-assisted Vehicular Networks
- Authors: Siyuan Wang, Qiong Wu, Qiang Fan, Cui Zhang and Zhengquan Li
- Abstract summary: Vehicular networks enable vehicles to support real-time vehicular applications by training on locally collected data.
For the traditional federated learning (FL), vehicles train the data locally to obtain a local model and then upload the local model to the RSU to update the global model.
The traditional FL updates the global model synchronously, i.e., the RSU needs to wait for all vehicles to upload their models for the global model updating.
It is necessary to propose an asynchronous federated learning (AFL) scheme to solve this problem, where the RSU updates the global model as soon as it receives a local model from a vehicle.
- Score: 7.624367655819205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vehicular networks enable vehicles to support real-time vehicular applications
through training data. Due to the limited computing capability, vehicles
usually transmit data to a road side unit (RSU) at the network edge to process
data. However, vehicles are usually reluctant to share data with each other due
to the privacy issue. For the traditional federated learning (FL), vehicles
train the data locally to obtain a local model and then upload the local model
to the RSU to update the global model, thus the data privacy can be protected
through sharing model parameters instead of data. The traditional FL updates
the global model synchronously, i.e., the RSU needs to wait for all vehicles to
upload their models before updating the global model. However, vehicles may
drive out of the coverage of the RSU before they obtain their local
models through training, which reduces the accuracy of the global model. It is
necessary to propose an asynchronous federated learning (AFL) to solve this
problem, where the RSU updates the global model once it receives a local model
from a vehicle. However, the amount of data, computing capability and vehicle
mobility may affect the accuracy of the global model. In this paper, we jointly
consider the amount of data, computing capability and vehicle mobility to
design an AFL scheme to improve the accuracy of the global model. Extensive
simulation experiments demonstrate that our scheme outperforms the traditional FL
scheme.
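The synchronous-vs-asynchronous distinction in the abstract can be illustrated with a minimal sketch. All names, the staleness-based weighting rule, and the parameters below are illustrative assumptions, not the paper's actual scheme (which additionally weights by data amount, computing capability and vehicle mobility): the RSU merges each vehicle's local model into the global model the moment it arrives, rather than waiting for all vehicles.

```python
import numpy as np

def afl_update(global_model, local_model, staleness, base_lr=0.5):
    """Asynchronously merge one vehicle's local model into the global model.

    The mixing weight decays with staleness (rounds elapsed since the
    vehicle downloaded the global model), a common AFL heuristic; it is
    a stand-in for the paper's joint data/computing/mobility weighting.
    """
    alpha = base_lr / (1.0 + staleness)  # stale updates count less
    return (1.0 - alpha) * global_model + alpha * local_model

# The RSU applies an update immediately on each arrival, instead of
# blocking until every vehicle has uploaded as in synchronous FL.
global_model = np.zeros(4)
arrivals = [(np.ones(4), 0), (2 * np.ones(4), 3)]  # (local model, staleness)
for local, staleness in arrivals:
    global_model = afl_update(global_model, local, staleness)
```

Decaying the weight with staleness is one way to keep an update from a vehicle that left the RSU's coverage long ago from dragging the global model toward an outdated state.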
Related papers
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Deep Reinforcement Learning Based Vehicle Selection for Asynchronous Federated Learning Enabled Vehicular Edge Computing [16.169301221410944]
In the traditional vehicular network, computing tasks generated by the vehicles are usually uploaded to the cloud for processing.
In this paper, we propose a deep reinforcement learning (DRL) based vehicle selection scheme to improve the accuracy of the global model in AFL for vehicular networks.
Simulation results demonstrate our scheme can effectively remove the bad nodes and improve the aggregation accuracy of the global model.
arXiv Detail & Related papers (2023-04-06T02:40:00Z)
- Boost Decentralized Federated Learning in Vehicular Networks by Diversifying Data Sources [16.342217928468227]
We propose the DFL-DDS (DFL with diversified Data Sources) algorithm to diversify data sources in DFL.
Specifically, each vehicle maintains a state vector to record the contribution weight of each data source to its model.
To boost the convergence of DFL, a vehicle tunes the aggregation weight of each data source by minimizing the KL divergence of its state vector.
arXiv Detail & Related papers (2022-09-05T04:01:41Z)
- Mobility, Communication and Computation Aware Federated Learning for Internet of Vehicles [29.476152044104005]
We propose a novel online FL platform that uses on-road vehicles as learning agents.
Thanks to the advanced features of modern vehicles, the on-board sensors can collect data as vehicles travel along their trajectories.
On-board processors can train machine learning models using the collected data.
arXiv Detail & Related papers (2022-05-17T19:14:38Z)
- Federated Learning with Downlink Device Selection [92.14944020945846]
We study federated edge learning, where a global model is trained collaboratively using privacy-sensitive data at the edge of a wireless network.
A parameter server (PS) keeps track of the global model and shares it with the wireless edge devices for training using their private local data.
We consider device selection based on downlink channels over which the PS shares the global model with the devices.
arXiv Detail & Related papers (2021-07-07T22:42:39Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
- Federated Learning With Quantized Global Model Updates [84.55126371346452]
We study federated learning, which enables mobile devices to utilize their local datasets to train a global model.
We introduce a lossy FL (LFL) algorithm, in which both the global model and the local model updates are quantized before being transmitted.
arXiv Detail & Related papers (2020-06-18T16:55:20Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
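The quantized-update idea in the "Federated Learning With Quantized Global Model Updates" entry above can be sketched as follows. The uniform quantizer, bit width, and all names here are illustrative assumptions rather than the LFL paper's actual construction; the point shown is only that both the downlink global model and the uplink local updates are quantized before transmission.

```python
import numpy as np

def quantize(x, bits=4):
    """Uniform scalar quantizer: snap x onto 2**bits evenly spaced levels."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    if hi == lo:  # constant vector: nothing to quantize
        return x.copy()
    step = (hi - lo) / levels
    return lo + np.round((x - lo) / step) * step

# One lossy FL round: the server broadcasts a quantized global model,
# clients return quantized local models, the server averages them.
rng = np.random.default_rng(0)
global_model = rng.normal(size=8)
broadcast = quantize(global_model)  # downlink is lossy too, not just uplink
client_updates = [quantize(broadcast + rng.normal(scale=0.1, size=8))
                  for _ in range(3)]
global_model = np.mean(client_updates, axis=0)
```

With a uniform quantizer the per-coordinate error is bounded by half a quantization step, which is what makes the lossy scheme's convergence analyzable.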
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.