Deep Federated Learning for Autonomous Driving
- URL: http://arxiv.org/abs/2110.05754v1
- Date: Tue, 12 Oct 2021 06:04:27 GMT
- Title: Deep Federated Learning for Autonomous Driving
- Authors: Anh Nguyen, Tuong Do, Minh Tran, Binh X. Nguyen, Chien Duong, Tu Phan,
Erman Tjiputra, Quang D. Tran
- Abstract summary: We present a new approach to learning an autonomous driving policy while respecting privacy concerns.
We propose a peer-to-peer Deep Federated Learning (DFL) approach to train deep architectures in a fully decentralized manner.
We design a new Federated Autonomous Driving network (FADNet) that can improve the model stability, ensure convergence, and handle imbalanced data distribution problems.
- Score: 15.514129425191266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous driving is an active research topic in both academia and industry.
However, most existing solutions focus on improving accuracy by
training learnable models on centralized, large-scale data, and therefore do
not take the user's privacy into account. In this paper, we present
a new approach to learning an autonomous driving policy while respecting privacy
concerns. We propose a peer-to-peer Deep Federated Learning (DFL) approach to
train deep architectures in a fully decentralized manner and remove the need
for central orchestration. We design a new Federated Autonomous Driving network
(FADNet) that can improve the model stability, ensure convergence, and handle
imbalanced data distribution problems while being trained with federated
learning methods. Extensive experimental results on three datasets show that
our approach with FADNet and DFL achieves superior accuracy compared with other
recent methods. Furthermore, our approach maintains privacy by never
collecting user data on a central server.
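To make the peer-to-peer training idea concrete, the following is a minimal sketch of decentralized federated averaging over a fixed ring of peers; the function names, the mixing rule, and the toy data are illustrative assumptions, not the paper's exact DFL protocol or FADNet architecture.

```python
import numpy as np

# Minimal peer-to-peer federated averaging sketch (illustrative only):
# each vehicle trains locally, then mixes weights with its ring neighbours,
# so no central server ever sees the raw driving data.

def local_update(weights, data, lr=0.01):
    """One local SGD-style step on private data (placeholder gradient)."""
    grad = np.mean(data, axis=0) - weights        # stand-in for a real gradient
    return weights + lr * grad

def mix_with_neighbors(all_weights, i, topology):
    """Average a peer's weights with those of its neighbours."""
    group = [all_weights[i]] + [all_weights[j] for j in topology[i]]
    return np.mean(group, axis=0)

num_peers, dim, rounds = 5, 8, 20
topology = {i: [(i - 1) % num_peers, (i + 1) % num_peers] for i in range(num_peers)}
weights = [np.zeros(dim) for _ in range(num_peers)]
private_data = [np.random.randn(32, dim) + i for i in range(num_peers)]  # non-IID toy data

for _ in range(rounds):
    weights = [local_update(w, d) for w, d in zip(weights, private_data)]
    weights = [mix_with_neighbors(weights, i, topology) for i in range(num_peers)]
```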
Related papers
- Blockchain-Enabled Federated Learning Approach for Vehicular Networks [3.1749005168397617]
We propose a practical approach that merges two emerging technologies: Federated Learning (FL) and the vehicular ecosystem.
In this setting, vehicles can learn from each other without compromising privacy while also ensuring data integrity and accountability.
Our method maintains high accuracy, making it a viable solution for preserving data privacy in vehicular networks.
arXiv Detail & Related papers (2023-11-10T19:51:18Z)
- Confidence-based federated distillation for vision-based lane-centering [4.071859628309787]
This paper presents a new confidence-based federated distillation method to improve the performance of machine learning for steering angle prediction.
A comprehensive evaluation of vision-based lane centering shows that the proposed approach can outperform FedAvg and FedDF by 11.3% and 9%, respectively.
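The abstract does not spell out the aggregation rule; the snippet below is only a generic sketch of confidence-weighted federated distillation, in which client predictions on a shared transfer set are averaged with weights given by each client's prediction confidence. The softmax-confidence heuristic and all names are assumptions, not the paper's method.

```python
import numpy as np

# Generic confidence-weighted federated distillation (illustrative, not the
# paper's exact method): client logits on a shared set are averaged with
# weights proportional to each client's per-sample prediction confidence.

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_soft_labels(client_logits):
    """client_logits: list of (num_samples, num_classes) arrays."""
    probs = [softmax(l) for l in client_logits]
    conf = [p.max(axis=1, keepdims=True) for p in probs]        # per-sample confidence
    total = np.sum(conf, axis=0)
    weighted = sum(c * p for c, p in zip(conf, probs)) / total  # confidence-weighted average
    return weighted  # soft labels used to distill the global (student) model
```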
arXiv Detail & Related papers (2023-06-05T20:16:19Z)
- Benchmarking FedAvg and FedCurv for Image Classification Tasks [1.376408511310322]
This paper focuses on the problem of statistical heterogeneity of the data in the same federated network.
Several Federated Learning algorithms, such as FedAvg, FedProx and Federated Curvature (FedCurv) have already been proposed.
As a by-product of this work, we release the non-IID versions of the datasets we used in order to facilitate further comparisons within the FL community.
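For context, FedAvg, the baseline used in these benchmarks, aggregates client models with a sample-count-weighted average. A minimal sketch with illustrative variable names:

```python
import numpy as np

# Standard FedAvg aggregation: average client weights, weighting each client
# by the number of local samples it trained on.

def fedavg(client_weights, client_sizes):
    """client_weights: list of 1-D parameter vectors; client_sizes: samples per client."""
    total = float(sum(client_sizes))
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Toy usage: three clients with different amounts of non-IID data.
clients = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 4.0])]
sizes = [100, 50, 10]
global_weights = fedavg(clients, sizes)
```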
arXiv Detail & Related papers (2023-03-31T10:13:01Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Federated Deep Learning Meets Autonomous Vehicle Perception: Design and Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z)
- Concept drift detection and adaptation for federated and continual learning [55.41644538483948]
Smart devices can collect vast amounts of data from their environment.
This data is suitable for training machine learning models, which can significantly improve their behavior.
In this work, we present a new method, called Concept-Drift-Aware Federated Averaging.
arXiv Detail & Related papers (2021-05-27T17:01:58Z)
- SCEI: A Smart-Contract Driven Edge Intelligence Framework for IoT Systems [15.796325306292134]
Federated learning (FL) enables collaborative training of a shared model on edge devices while maintaining data privacy.
Various personalized approaches have been proposed, but such approaches fail to handle underlying shifts in data distribution.
This paper presents a dynamically optimized personalized deep learning scheme based on blockchain and federated learning.
arXiv Detail & Related papers (2021-03-12T02:57:05Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
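As a rough, hedged illustration of this idea, the step below builds momentum from the change of a worker's gossip-averaged model between rounds, used as a proxy for a global gradient; the exact update rule, constants, and function name are assumptions rather than the paper's algorithm.

```python
import numpy as np

# Simplified illustration of momentum built from the change of the locally
# averaged model (a proxy for a "global" gradient), rather than from purely
# local gradients. A sketch of the idea only, not the paper's exact rule.

def quasi_global_step(x, x_prev, local_grad, momentum, lr=0.05, beta=0.9):
    """x, x_prev: current and previous (gossip-averaged) parameters of one worker."""
    global_proxy = (x_prev - x) / lr                  # model change as pseudo-gradient
    momentum = beta * momentum + (1 - beta) * global_proxy
    x_new = x - lr * (local_grad + momentum)          # local step corrected by momentum
    return x_new, momentum
```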
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach that allows knowledge to be shared between trained models.
Students are trained on the output of their teachers via synthetically generated input data.
The results show that an initially untrained student model, trained on the teacher's outputs, reaches F1-scores comparable to those of the teacher.
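A minimal sketch of this kind of knowledge transfer, assuming the student is simply fitted to the teacher's outputs on synthetically generated inputs; the linear models and training loop are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

# Sketch of decentralized knowledge transfer: a student model is fitted to a
# teacher's outputs on synthetic inputs, so no raw training data is exchanged.

rng = np.random.default_rng(0)
dim = 16

def teacher_predict(x, w_teacher):
    return x @ w_teacher                        # stand-in for a trained teacher

w_teacher = rng.normal(size=(dim, 1))
w_student = np.zeros((dim, 1))

for _ in range(200):
    x_syn = rng.normal(size=(64, dim))          # synthetically generated inputs
    y_soft = teacher_predict(x_syn, w_teacher)  # teacher's soft targets
    pred = x_syn @ w_student
    grad = x_syn.T @ (pred - y_soft) / len(x_syn)   # MSE gradient on soft targets
    w_student -= 0.1 * grad
```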
arXiv Detail & Related papers (2021-02-01T14:38:54Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
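A generic interpretation of one multi-center aggregation round is sketched below: each client model is assigned to the nearest global center by parameter distance, and each center is re-estimated as the mean of its assigned clients. The assignment rule and function name are assumptions, not necessarily the paper's exact objective.

```python
import numpy as np

# Generic multi-center aggregation sketch: assign each client model to its
# nearest global center, then update every center as the mean of its clients.

def multi_center_round(client_weights, centers):
    assign = [int(np.argmin([np.linalg.norm(w - c) for c in centers]))
              for w in client_weights]
    new_centers = []
    for k, c in enumerate(centers):
        members = [w for w, a in zip(client_weights, assign) if a == k]
        new_centers.append(np.mean(members, axis=0) if members else c)
    return new_centers, assign
```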
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
- Concentrated Differentially Private and Utility Preserving Federated Learning [24.239992194656164]
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation on model utility.
We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates.
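The abstract does not detail the mechanism, but a common pattern for such guarantees is to clip each client update and add calibrated Gaussian noise before averaging (DP-FedAvg style); the sketch below shows that generic pattern and is not necessarily the paper's scheme.

```python
import numpy as np

# Generic differentially private aggregation sketch: clip each client update
# to a norm bound, sum, add Gaussian noise, and average.

def dp_aggregate(updates, clip_norm=1.0, noise_std=0.5, rng=None):
    rng = rng or np.random.default_rng()
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12)) for u in updates]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(scale=noise_std * clip_norm,
                                                     size=clipped[0].shape)
    return noisy_sum / len(updates)
```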
arXiv Detail & Related papers (2020-03-30T19:20:42Z)