UniLiDAR: Bridge the domain gap among different LiDARs for continual
learning
- URL: http://arxiv.org/abs/2403.08512v1
- Date: Wed, 13 Mar 2024 13:23:05 GMT
- Authors: Zikun Xu, Jianqiang Wang, Shaobing Xu
- Abstract summary: This paper aims to develop a unified model capable of handling different LiDARs.
We propose UniLiDAR, an occupancy prediction pipeline that leverages geometric realignment and semantic label mapping.
UniLiDAR elevates the mIoU of occupancy prediction on OpenOccupancy-nuScenes and SemanticKITTI by 15.7% and 12.5%, respectively, compared to a model trained on the directly merged dataset.
- Score: 10.10834581581264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LiDAR-based 3D perception algorithms have evolved rapidly alongside the
emergence of large datasets. Nonetheless, considerable performance degradation
often ensues when models trained on a specific dataset are applied to other
datasets or real-world scenarios with different LiDARs. This paper aims to
develop a unified model capable of handling different LiDARs, enabling
continual learning across diverse LiDAR datasets and seamless deployment across
heterogeneous platforms. We observe that the gaps among datasets primarily
manifest in geometric disparities (such as variations in beams and point
counts) and semantic inconsistencies (taxonomy conflicts). To this end, this
paper proposes UniLiDAR, an occupancy prediction pipeline that leverages
geometric realignment and semantic label mapping to facilitate multi-dataset
training and mitigate performance degradation during deployment on
heterogeneous platforms. Moreover, our method can be easily combined with
existing 3D perception models. The efficacy of the proposed approach in
bridging LiDAR domain gaps is verified by comprehensive experiments on two
prominent datasets: OpenOccupancy-nuScenes and SemanticKITTI. UniLiDAR elevates
the mIoU of occupancy prediction by 15.7% and 12.5%, respectively, compared to
the model trained on the directly merged dataset. Moreover, it outperforms
several SOTA methods trained on individual datasets. We expect our research to
facilitate further study of 3D generalization; the code will be available soon.
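The semantic-label-mapping half of the pipeline can be illustrated with a minimal sketch: labels from each dataset's own taxonomy are remapped into one unified label space before training. The class names and the unified taxonomy below are illustrative assumptions, not the paper's actual mapping tables.

```python
# Hedged sketch of semantic label mapping across LiDAR datasets, in the spirit
# of UniLiDAR. The mappings below are hypothetical, for illustration only.
import numpy as np

# Hypothetical per-dataset label -> unified label tables.
NUSCENES_TO_UNIFIED = {"car": "vehicle", "truck": "vehicle", "pedestrian": "person"}
KITTI_TO_UNIFIED = {"car": "vehicle", "person": "person", "bicyclist": "person"}

UNIFIED_CLASSES = ["vehicle", "person"]
UNIFIED_ID = {name: i for i, name in enumerate(UNIFIED_CLASSES)}

def remap_labels(labels, mapping):
    """Map a dataset's string labels into unified integer ids (-1 = unmapped/ignore)."""
    return np.array([UNIFIED_ID.get(mapping.get(l), -1) for l in labels])

# Point-wise labels from two datasets now land in the same label space.
nusc = remap_labels(["car", "truck", "pedestrian"], NUSCENES_TO_UNIFIED)
kitti = remap_labels(["person", "car", "bicyclist"], KITTI_TO_UNIFIED)
```

With a shared label space, per-class losses and mIoU can be computed consistently across both datasets; geometric realignment would handle the beam and point-count side of the gap separately.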
Related papers
- ARC: A Generalist Graph Anomaly Detector with In-Context Learning [62.202323209244]
ARC is a generalist GAD approach that enables a "one-for-all" GAD model to detect anomalies across various graph datasets on-the-fly.
Equipped with in-context learning, ARC can directly extract dataset-specific patterns from the target dataset.
Extensive experiments on multiple benchmark datasets from various domains demonstrate the superior anomaly detection performance, efficiency, and generalizability of ARC.
arXiv Detail & Related papers (2024-05-27T02:42:33Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- Just Add $100 More: Augmenting NeRF-based Pseudo-LiDAR Point Cloud for Resolving Class-imbalance Problem [12.26293873825084]
We propose to leverage pseudo-LiDAR point clouds generated from videos capturing a surround view of miniatures or real-world objects of minor classes.
Our method, called Pseudo Ground Truth Augmentation (PGT-Aug), consists of three main steps: (i) volumetric 3D instance reconstruction using a 2D-to-3D view synthesis model, (ii) object-level domain alignment with LiDAR intensity estimation, and (iii) a hybrid context-aware placement method from ground and map information.
arXiv Detail & Related papers (2024-03-18T08:50:04Z)
- Joint Distributional Learning via Cramer-Wold Distance [0.7614628596146602]
We introduce the Cramer-Wold distance regularization, which can be computed in a closed-form, to facilitate joint distributional learning for high-dimensional datasets.
We also introduce a two-step learning method to enable flexible prior modeling and improve the alignment between the aggregated posterior and the prior distribution.
arXiv Detail & Related papers (2023-10-25T05:24:23Z)
- SPOT: Scalable 3D Pre-training via Occupancy Prediction for Learning Transferable 3D Representations [76.45009891152178]
The pretraining-finetuning approach can alleviate the labeling burden by fine-tuning a pre-trained backbone across various downstream datasets and tasks.
We show, for the first time, that general representation learning can be achieved through the task of occupancy prediction.
Our findings will facilitate the understanding of LiDAR points and pave the way for future advancements in LiDAR pre-training.
arXiv Detail & Related papers (2023-09-19T11:13:01Z)
- Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training [44.790636524264]
Point Prompt Training is a novel framework for multi-dataset synergistic learning in the context of 3D representation learning.
It can overcome the negative transfer associated with synergistic learning and produce generalizable representations.
It achieves state-of-the-art performance on each dataset using a single weight-shared model with supervised multi-dataset training.
arXiv Detail & Related papers (2023-08-18T17:59:57Z)
- MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based Self-Supervised Pre-Training [58.07391711548269]
We propose the Masked Voxel Jigsaw and Reconstruction (MV-JAR) method for LiDAR-based self-supervised pre-training.
arXiv Detail & Related papers (2023-03-23T17:59:02Z)
- Uni3D: A Unified Baseline for Multi-dataset 3D Object Detection [34.2238222373818]
Current 3D object detection models follow a single dataset-specific training and testing paradigm.
In this paper, we study the task of training a unified 3D detector from multiple datasets.
We present Uni3D, which leverages a simple data-level correction operation and a designed semantic-level coupling-and-recoupling module.
arXiv Detail & Related papers (2023-03-13T05:54:13Z)
- LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection [96.63947479020631]
In many real-world applications, the LiDAR points used by mass-produced robots and vehicles usually have fewer beams than those in large-scale public datasets.
We propose LiDAR Distillation to bridge the domain gap induced by different LiDAR beams for 3D object detection.
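The beam-induced gap described above can be reproduced in its simplest form by subsampling beams of a dense scan. A minimal sketch, assuming beam membership is given by a per-point ring index (real pipelines may instead infer beams from vertical angles); the function name and sizes here are illustrative, not from the paper:

```python
# Hedged sketch: simulate a lower-beam LiDAR from a high-beam scan by keeping
# every other beam (e.g. 64 -> 32 beams). Assumes a per-point ring index.
import numpy as np

def downsample_beams(points, ring, keep_every=2):
    """Keep points whose ring (beam) index is a multiple of `keep_every`."""
    mask = (ring % keep_every) == 0
    return points[mask], ring[mask]

# Synthetic stand-in for a 64-beam scan: xyz points plus ring indices.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3)).astype(np.float32)
rings = rng.integers(0, 64, size=1000)
pts32, rings32 = downsample_beams(pts, rings)
```

Training a detector on such subsampled scans while distilling from the full-density ones is one plausible way to frame the cross-beam setup the entry targets.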
arXiv Detail & Related papers (2022-03-28T17:59:02Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performance on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.