GIPSO: Geometrically Informed Propagation for Online Adaptation in 3D LiDAR Segmentation
- URL: http://arxiv.org/abs/2207.09763v1
- Date: Wed, 20 Jul 2022 09:06:07 GMT
- Title: GIPSO: Geometrically Informed Propagation for Online Adaptation in 3D LiDAR Segmentation
- Authors: Cristiano Saltori, Evgeny Krivosheev, Stéphane Lathuilière, Nicu Sebe, Fabio Galasso, Giuseppe Fiameni, Elisa Ricci, Fabio Poiesi
- Abstract summary: 3D point cloud semantic segmentation is fundamental for autonomous driving.
Most approaches in the literature neglect an important aspect, i.e., how to deal with domain shift when handling dynamic scenes.
This paper advances the state of the art in this research field.
- Score: 60.07812405063708
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D point cloud semantic segmentation is fundamental for autonomous driving.
Most approaches in the literature neglect an important aspect, i.e., how to
deal with domain shift when handling dynamic scenes. This can significantly
hinder the navigation capabilities of self-driving vehicles. This paper
advances the state of the art in this research field. Our first contribution
consists in analysing a new unexplored scenario in point cloud segmentation,
namely Source-Free Online Unsupervised Domain Adaptation (SF-OUDA). We
experimentally show that state-of-the-art methods have a rather limited ability
to adapt pre-trained deep network models to unseen domains in an online manner.
Our second contribution is an approach that relies on adaptive self-training
and geometric-feature propagation to adapt a pre-trained source model online
without requiring either source data or target labels. Our third contribution
is to study SF-OUDA in a challenging setup where source data is synthetic and
target data is point clouds captured in the real world. We use the recent
SynLiDAR dataset as a synthetic source and introduce two new synthetic (source)
datasets, which can stimulate future synthetic-to-real autonomous driving
research. Our experiments show the effectiveness of our segmentation approach
on thousands of real-world point clouds. Code and synthetic datasets are
available at https://github.com/saltoricristiano/gipso-sfouda.
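The abstract's central mechanism, adaptive self-training guided by geometric structure, can be pictured with a short sketch. The snippet below is a minimal illustration under simplifying assumptions, not the authors' implementation: the function name, thresholds, and the plain nearest-neighbour vote over 3D coordinates are invented for exposition, whereas the paper propagates pseudo-labels using geometric features learned by the network.

```python
# Illustrative sketch only: propagate confident pseudo-labels to geometrically
# close points of a single LiDAR scan. Names and thresholds are hypothetical.
import numpy as np

def propagate_pseudo_labels(points, probs, conf_thr=0.9, k=5, radius=0.5):
    """points: (N, 3) coordinates; probs: (N, C) softmax scores of the source model."""
    conf = probs.max(axis=1)                 # per-point confidence
    labels = probs.argmax(axis=1)            # hard predictions
    seeds = np.where(conf >= conf_thr)[0]    # confident seed points
    pseudo = np.full(len(points), -1, dtype=np.int64)   # -1 = ignore in the loss
    pseudo[seeds] = labels[seeds]

    for i in np.where(conf < conf_thr)[0]:
        # distances from this uncertain point to all confident seeds
        d = np.linalg.norm(points[seeds] - points[i], axis=1)
        nn = np.argsort(d)[:k]
        if len(nn) and d[nn[0]] < radius:
            # inherit the majority label of the nearest confident neighbours
            votes = np.bincount(labels[seeds[nn]], minlength=probs.shape[1])
            pseudo[i] = votes.argmax()
    return pseudo  # use as self-training targets, skipping -1 entries
```

In an online setting, a step like this would run on each incoming scan, and the propagated labels would supervise a lightweight update of the pre-trained model without access to source data or target annotations.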
Related papers
- Syn-to-Real Unsupervised Domain Adaptation for Indoor 3D Object Detection [50.448520056844885]
We propose a novel framework for syn-to-real unsupervised domain adaptation in indoor 3D object detection.
Our adaptation results from synthetic dataset 3D-FRONT to real-world datasets ScanNetV2 and SUN RGB-D demonstrate remarkable mAP25 improvements of 9.7% and 9.1% over Source-Only baselines.
arXiv Detail & Related papers (2024-06-17T08:18:41Z) - From Synthetic to Real: Unveiling the Power of Synthetic Data for Video
Person Re-ID [15.81210364737776]
We study a new problem of cross-domain video-based person re-identification (Re-ID).
We take the synthetic video dataset as the source domain for training and use the real-world videos for testing.
We are surprised to find that the synthetic data performs even better than the real data in the cross-domain setting.
arXiv Detail & Related papers (2024-02-03T10:19:21Z) - Compositional Semantic Mix for Domain Adaptation in Point Cloud
Segmentation [65.78246406460305]
Compositional semantic mixing represents the first unsupervised domain adaptation technique for point cloud segmentation.
We present a two-branch symmetric network architecture capable of concurrently processing point clouds from a source domain (e.g. synthetic) and point clouds from a target domain (e.g. real-world).
arXiv Detail & Related papers (2023-08-28T14:43:36Z) - A New Benchmark: On the Utility of Synthetic Data with Blender for Bare
Supervised Learning and Downstream Domain Adaptation [42.2398858786125]
Deep learning in computer vision has achieved great success at the price of large-scale labeled training data.
The uncontrollable data collection process produces non-IID training and test data, where undesired duplication may exist.
To circumvent them, an alternative is to generate synthetic data via 3D rendering with domain randomization.
arXiv Detail & Related papers (2023-03-16T09:03:52Z) - Domain Adaptation of Synthetic Driving Datasets for Real-World
Autonomous Driving [0.11470070927586014]
Networks trained with synthetic data for certain computer vision tasks degrade significantly when tested on real-world data.
In this paper, we propose and evaluate novel ways for the betterment of such approaches.
We propose a novel method to efficiently incorporate semantic supervision into this pair selection, which helps in boosting the performance of the model.
arXiv Detail & Related papers (2023-02-08T15:51:54Z) - Deformation and Correspondence Aware Unsupervised Synthetic-to-Real
Scene Flow Estimation for Point Clouds [43.792032657561236]
We develop a point cloud collector and scene flow annotator for GTA-V engine to automatically obtain diverse training samples without human intervention.
We propose a mean-teacher-based domain adaptation framework that leverages self-generated pseudo-labels of the target domain (a generic sketch of the mean-teacher scheme appears after this list).
Our framework achieves superior adaptation performance on six source-target dataset pairs, remarkably closing the average domain gap by 60%.
arXiv Detail & Related papers (2022-03-31T09:03:23Z) - Towards Optimal Strategies for Training Self-Driving Perception Models
in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z) - Unsupervised Domain Adaptive Learning via Synthetic Data for Person
Re-identification [101.1886788396803]
Person re-identification (re-ID) has gained more and more attention due to its widespread applications in video surveillance.
Unfortunately, the mainstream deep learning methods still need a large quantity of labeled data to train models.
In this paper, we develop a data collector to automatically generate synthetic re-ID samples in a computer game, and construct a data labeler to simultaneously annotate them.
arXiv Detail & Related papers (2021-09-12T15:51:41Z) - ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework
for LiDAR Point Cloud Segmentation [111.56730703473411]
Training deep neural networks (DNNs) on LiDAR data requires large-scale point-wise annotations.
Simulation-to-real domain adaptation (SRDA) trains a DNN using unlimited synthetic data with automatically generated labels.
ePointDA consists of three modules: self-supervised dropout noise rendering, statistics-invariant and spatially-adaptive feature alignment, and transferable segmentation learning.
arXiv Detail & Related papers (2020-09-07T23:46:08Z)