FedDrive: Generalizing Federated Learning to Semantic Segmentation in
Autonomous Driving
- URL: http://arxiv.org/abs/2202.13670v3
- Date: Sat, 23 Sep 2023 10:44:58 GMT
- Title: FedDrive: Generalizing Federated Learning to Semantic Segmentation in
Autonomous Driving
- Authors: Lidia Fantauzzo, Eros Fanì, Debora Caldarola, Antonio Tavera, Fabio
Cermelli, Marco Ciccone, Barbara Caputo
- Abstract summary: Federated learning aims to learn a global model while preserving privacy and leveraging data on millions of remote devices.
We propose FedDrive, a new benchmark consisting of three settings and two datasets.
We benchmark state-of-the-art algorithms from the federated learning literature through an in-depth analysis.
- Score: 27.781734303644516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic Segmentation is essential to make self-driving vehicles autonomous,
enabling them to understand their surroundings by assigning individual pixels
to known categories. However, it operates on sensitive data collected from the
users' cars; thus, protecting the clients' privacy becomes a primary concern.
For similar reasons, Federated Learning has been recently introduced as a new
machine learning paradigm aiming to learn a global model while preserving
privacy and leveraging data on millions of remote devices. Despite several
efforts on this topic, no work has explicitly addressed the challenges of
federated learning in semantic segmentation for driving so far. To fill this
gap, we propose FedDrive, a new benchmark consisting of three settings and two
datasets, incorporating the real-world challenges of statistical heterogeneity
and domain generalization. We benchmark state-of-the-art algorithms from the
federated learning literature through an in-depth analysis, combining them with
style transfer methods to improve their generalization ability. We demonstrate
that correctly handling normalization statistics is crucial to deal with the
aforementioned challenges. Furthermore, style transfer improves performance
when dealing with significant appearance shifts. Official website:
https://feddrive.github.io.
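Two of the abstract's findings lend themselves to short illustrative sketches. First, "correctly handling normalization statistics" typically means keeping BatchNorm running statistics client-local rather than averaging them on the server, in the spirit of SiloBN-style strategies. The snippet below is a minimal sketch of such a federated round, assuming PyTorch state_dicts, clients stored as dictionaries, and a `local_update(client, state)` callable; these names are illustrative assumptions, not the FedDrive implementation.
```python
# Minimal sketch (assumed helpers: `clients` as dicts, `local_update(client, state)`
# returning the locally trained state_dict and the client's sample count).
# BatchNorm running statistics stay on each client; all other parameters are
# averaged on the server (FedAvg). This mirrors SiloBN-style handling of
# normalization statistics, not the exact FedDrive code.
import copy

def is_bn_stat(name: str) -> bool:
    # BatchNorm buffers are identified by their state_dict key suffixes.
    return name.endswith(("running_mean", "running_var", "num_batches_tracked"))

def federated_round(global_state, clients, local_update):
    client_states, sample_counts = [], []
    for client in clients:
        local_state = copy.deepcopy(global_state)
        # Restore this client's own BN statistics before local training.
        local_state.update(client.get("bn_stats", {}))
        new_state, n_samples = local_update(client, local_state)
        # BN statistics are stored locally and never sent to the server.
        client["bn_stats"] = {k: v.clone() for k, v in new_state.items() if is_bn_stat(k)}
        client_states.append({k: v for k, v in new_state.items() if not is_bn_stat(k)})
        sample_counts.append(n_samples)
    total = float(sum(sample_counts))
    # Weighted average (FedAvg) over everything except BN running statistics.
    for key in client_states[0]:
        avg = sum((n / total) * s[key].float() for s, n in zip(client_states, sample_counts))
        global_state[key] = avg.to(global_state[key].dtype)
    return global_state
```
Second, the style-transfer component targets appearance shifts across clients (e.g., different cities or weather). One lightweight technique in this family is Fourier amplitude swapping (as in FDA-like methods), sketched below under the assumption of float images in [0, 1]; the specific style-transfer methods benchmarked in FedDrive may differ.
```python
import torch

def fourier_style_swap(content: torch.Tensor, style: torch.Tensor, beta: float = 0.01) -> torch.Tensor:
    """Replace the low-frequency amplitude of `content` with that of `style`.

    This transfers global appearance (color, illumination) while preserving the
    content's structure, which lives mostly in the phase. Both inputs are
    (C, H, W) float tensors in [0, 1]; `beta` sets the size of the swapped
    low-frequency band.
    """
    fc, fs = torch.fft.fft2(content), torch.fft.fft2(style)
    amp_c, phase_c, amp_s = fc.abs(), fc.angle(), fs.abs()
    _, h, w = content.shape
    bh, bw = max(1, int(beta * h)), max(1, int(beta * w))
    # Low frequencies sit at the four corners of the unshifted spectrum.
    for rows in (slice(0, bh), slice(h - bh, h)):
        for cols in (slice(0, bw), slice(w - bw, w)):
            amp_c[:, rows, cols] = amp_s[:, rows, cols]
    stylized = torch.fft.ifft2(amp_c * torch.exp(1j * phase_c)).real
    return stylized.clamp(0.0, 1.0)
```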
Related papers
- FedDrive v2: an Analysis of the Impact of Label Skewness in Federated
Semantic Segmentation for Autonomous Driving [26.99151955856939]
We propose FedDrive v2, an extension of the Federated Learning benchmark for Semantic Segmentation in Autonomous Driving.
While the first version aims at studying the effect of domain shift of the visual features across clients, in this work, we focus on the distribution skewness of the labels.
We propose six new scenarios to investigate how label skewness affects the performance of segmentation models and compare it with the effect of domain shift.
arXiv Detail & Related papers (2023-09-23T10:58:08Z)
- Recent Advancements in End-to-End Autonomous Driving using Deep Learning: A Survey [9.385936248154987]
End-to-End driving is a promising paradigm as it circumvents the drawbacks associated with modular systems.
Recent developments in End-to-End autonomous driving are analyzed, and research is categorized based on underlying principles.
This paper assesses the state-of-the-art, identifies challenges, and explores future possibilities.
arXiv Detail & Related papers (2023-07-10T07:00:06Z)
- Penalty-Based Imitation Learning With Cross Semantics Generation Sensor Fusion for Autonomous Driving [1.2749527861829049]
In this paper, we provide a penalty-based imitation learning approach to integrate multiple modalities of information.
We observe a remarkable increase of more than 12% in the driving score compared to the state-of-the-art (SOTA) model, InterFuser.
Our model achieves this performance enhancement while achieving a 7-fold increase in inference speed and reducing the model size by approximately 30%.
arXiv Detail & Related papers (2023-03-21T14:29:52Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- Towards Autonomous Driving of Personal Mobility with Small and Noisy Dataset using Tsallis-statistics-based Behavioral Cloning [1.7970523486905976]
This study focuses on an autonomous driving method for personal mobility vehicles trained on such a small and noisy, so-called personal, dataset.
Specifically, we introduce a new loss function based on Tsallis statistics that weights gradients depending on the original loss function.
In addition, we improve the visualization technique to verify whether the driver and the controller have the same region of interest.
arXiv Detail & Related papers (2021-11-29T01:56:12Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Diverse Complexity Measures for Dataset Curation in Self-driving [80.55417232642124]
We propose a new data selection method that exploits a diverse set of criteria to quantify the interestingness of traffic scenes.
Our experiments show that the proposed curation pipeline is able to select datasets that lead to better generalization and higher performance.
arXiv Detail & Related papers (2021-01-16T23:45:02Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with them.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
- Explicit Domain Adaptation with Loosely Coupled Samples [85.9511585604837]
We propose a transfer learning framework, the core of which is learning an explicit mapping between domains.
Due to its interpretability, this is beneficial for safety-critical applications, like autonomous driving.
arXiv Detail & Related papers (2020-04-24T21:23:45Z)