MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based
Self-Supervised Pre-Training
- URL: http://arxiv.org/abs/2303.13510v1
- Date: Thu, 23 Mar 2023 17:59:02 GMT
- Title: MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based
Self-Supervised Pre-Training
- Authors: Runsen Xu, Tai Wang, Wenwei Zhang, Runjian Chen, Jinkun Cao, Jiangmiao
Pang, Dahua Lin
- Abstract summary: Masked Voxel Jigsaw and Reconstruction (MV-JAR) method for LiDAR-based self-supervised pre-training.
Masked Voxel Jigsaw and Reconstruction (MV-JAR) method for LiDAR-based self-supervised pre-training.
- Score: 58.07391711548269
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper introduces the Masked Voxel Jigsaw and Reconstruction (MV-JAR)
method for LiDAR-based self-supervised pre-training and a carefully designed
data-efficient 3D object detection benchmark on the Waymo dataset. Inspired by
the scene-voxel-point hierarchy in downstream 3D object detectors, we design
masking and reconstruction strategies accounting for voxel distributions in the
scene and local point distributions within the voxel. We employ a
Reversed-Furthest-Voxel-Sampling strategy to address the uneven distribution of
LiDAR points and propose MV-JAR, which combines two techniques for modeling the
aforementioned distributions, resulting in superior performance. Our
experiments reveal limitations in previous data-efficient experiments, which
uniformly sample fine-tuning splits with varying data proportions from each
LiDAR sequence, leading to similar data diversity across splits. To address
this, we propose a new benchmark that samples scene sequences for diverse
fine-tuning splits, ensuring adequate model convergence and providing a more
accurate evaluation of pre-training methods. Experiments on our Waymo benchmark
and the KITTI dataset demonstrate that MV-JAR consistently and significantly
improves 3D detection performance across various data scales, achieving up to a
6.3% increase in mAPH compared to training from scratch. Code and the
benchmark will be available at https://github.com/SmartBot-PJLab/MV-JAR.
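To make the sampling idea concrete, below is a minimal NumPy sketch of one plausible reading of Reversed-Furthest-Voxel-Sampling: run farthest-point sampling over occupied voxel centers and keep the selected voxels visible, so masking concentrates in dense regions and cannot wipe out sparse ones. This is not the authors' implementation; the keep-ratio, the seeding, and the keep-visible interpretation are illustrative assumptions based on the abstract.

```python
# A sketch of one plausible reading of Reversed-Furthest-Voxel-Sampling:
# farthest-point sampling picks voxels to KEEP VISIBLE, and the remaining
# voxels are masked, protecting sparse regions from complete masking.
# Not the authors' code; mask_ratio and the deterministic seed are assumptions.
import numpy as np

def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
    """Return indices of k points chosen by greedy farthest-point sampling."""
    n = points.shape[0]
    chosen = np.zeros(k, dtype=np.int64)
    dist = np.full(n, np.inf)
    chosen[0] = 0  # deterministic seed for the sketch
    for i in range(1, k):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        chosen[i] = int(dist.argmax())
    return chosen

def rfvs_mask(voxel_centers: np.ndarray, mask_ratio: float = 0.7) -> np.ndarray:
    """Boolean mask over voxels: True = masked. Farthest-sampled voxels stay visible."""
    n = voxel_centers.shape[0]
    n_keep = max(1, int(round(n * (1.0 - mask_ratio))))
    keep = farthest_point_sampling(voxel_centers, n_keep)
    masked = np.ones(n, dtype=bool)
    masked[keep] = False
    return masked

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers = rng.uniform(-50, 50, size=(2048, 3))  # stand-in for occupied voxel centers
    m = rfvs_mask(centers, mask_ratio=0.7)
    print(f"masked {m.sum()} / {m.size} voxels")
```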
Related papers
- OPUS: Occupancy Prediction Using a Sparse Set [64.60854562502523]
We present a framework to simultaneously predict occupied locations and classes using a set of learnable queries.
OPUS incorporates a suite of non-trivial strategies to enhance model performance.
Our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at nearly 2x the FPS, while our heaviest model surpasses the previous best results by 6.1 RayIoU.
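As a rough sketch of the learnable-query idea: a fixed set of query embeddings attends to scene features and each query emits a 3D location plus class logits. Layer sizes, the single cross-attention decoder, and the head designs are invented for brevity, not the OPUS architecture.

```python
# A rough sketch of predicting occupied locations and classes from a set of
# learnable queries. Dimensions and heads are illustrative assumptions.
import torch
import torch.nn as nn

class QueryOccupancyHead(nn.Module):
    def __init__(self, num_queries=600, dim=256, num_classes=17):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.decoder = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.point_head = nn.Linear(dim, 3)            # (x, y, z) per query
        self.class_head = nn.Linear(dim, num_classes)  # class logits per query

    def forward(self, scene_feats):  # scene_feats: (B, N, dim) encoder tokens
        q = self.queries.unsqueeze(0).expand(scene_feats.size(0), -1, -1)
        q = self.decoder(q, scene_feats)
        return self.point_head(q), self.class_head(q)

feats = torch.randn(2, 1024, 256)        # stand-in for sparse scene features
points, logits = QueryOccupancyHead()(feats)
print(points.shape, logits.shape)         # (2, 600, 3), (2, 600, 17)
```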
arXiv Detail & Related papers (2024-09-14T07:44:22Z)
- Multi-Space Alignments Towards Universal LiDAR Segmentation [50.992103482269016]
M3Net is a unified framework for multi-task, multi-dataset, multi-modality LiDAR segmentation.
We first combine large-scale driving datasets acquired by different types of sensors from diverse scenes.
We then conduct alignments in three spaces, namely data, feature, and label spaces, during the training.
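As a toy illustration of one of the three alignments, label-space alignment can be read as mapping each dataset's class vocabulary onto a shared taxonomy so one head trains across datasets. The class names and the mapping below are invented examples, not M3Net's actual label map.

```python
# A toy sketch of label-space alignment: translate dataset-specific labels
# into one unified id space. Names and mappings are illustrative assumptions.
UNIFIED = {"car": 0, "truck": 1, "pedestrian": 2, "cyclist": 3, "other": 4}

DATASET_TO_UNIFIED = {
    "kitti":    {"Car": "car", "Van": "truck", "Pedestrian": "pedestrian", "Cyclist": "cyclist"},
    "nuscenes": {"car": "car", "truck": "truck", "pedestrian": "pedestrian", "bicycle": "cyclist"},
}

def align_label(dataset: str, raw_label: str) -> int:
    """Translate a dataset-specific label into the unified id space."""
    unified_name = DATASET_TO_UNIFIED[dataset].get(raw_label, "other")
    return UNIFIED[unified_name]

print(align_label("kitti", "Van"), align_label("nuscenes", "bicycle"))  # 1 3
```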
arXiv Detail & Related papers (2024-05-02T17:59:57Z)
- LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models [1.1965844936801797]
Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots.
We present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds.
Our method is built upon denoising diffusion probabilistic models (DDPMs), which have shown impressive results among generative model frameworks.
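For reference, the DDPM forward (noising) process that such models learn to invert is x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. The sketch below uses a common linear beta schedule; the schedule length and data shape are generic defaults, not R2DM's exact settings.

```python
# A minimal sketch of the DDPM forward (noising) process. The beta range and
# schedule length are common defaults, not R2DM's configuration.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0), here for a LiDAR scene as a range image."""
    eps = torch.randn_like(x0)
    return alpha_bars[t].sqrt() * x0 + (1.0 - alpha_bars[t]).sqrt() * eps

x0 = torch.randn(1, 1, 64, 1024)  # stand-in for a LiDAR range image
print(q_sample(x0, t=500).shape)
```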
arXiv Detail & Related papers (2023-09-17T12:26:57Z)
- Diffusion-based 3D Object Detection with Random Boxes [58.43022365393569]
Existing anchor-based 3D detection methods rely on empirically tuned anchor settings, which makes these algorithms inelegant.
Our proposed Diff3Det adapts the diffusion model to proposal generation for 3D object detection by treating detection boxes as generative targets.
In the inference stage, the model progressively refines a set of random boxes to the prediction results.
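A schematic of that inference loop, with a stub in place of the learned denoiser: boxes start as noise and are refined step by step. The box parameterization, step count, and the identity-style stub are placeholders, not Diff3Det's sampler.

```python
# A schematic of diffusion-style box refinement at inference: start from random
# boxes and repeatedly let a denoising model nudge them toward detections.
import torch

def refine_boxes(model, num_boxes=100, steps=4):
    boxes = torch.randn(num_boxes, 7)  # (x, y, z, l, w, h, yaw) drawn at random
    for t in reversed(range(steps)):
        boxes = model(boxes, t)        # one denoising step: predict cleaner boxes
    return boxes

# Shrinking stub stands in for the learned denoiser so the sketch runs end to end.
pred = refine_boxes(lambda b, t: b * 0.9, steps=4)
print(pred.shape)  # torch.Size([100, 7])
```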
arXiv Detail & Related papers (2023-09-05T08:49:53Z)
- Monocular 3D Object Detection with LiDAR Guided Semi Supervised Active Learning [2.16117348324501]
We propose a novel semi-supervised active learning (SSAL) framework for monocular 3D object detection with LiDAR guidance (MonoLiG).
We utilize LiDAR to guide the data selection and training of monocular 3D detectors without introducing any overhead in the inference phase.
Our training strategy ranks first on the official KITTI 3D and bird's-eye-view (BEV) monocular object detection benchmarks, improving the BEV Average Precision (AP) by 2.02.
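As a toy illustration of how LiDAR could guide data selection: score unlabeled frames by the disagreement between monocular depth estimates and LiDAR-derived depths, and label the most uncertain frames first. This scoring rule is an assumption for illustration, not MonoLiG's actual acquisition criterion.

```python
# A toy acquisition rule in the spirit of LiDAR-guided active learning.
# The disagreement score is an illustrative assumption.
import numpy as np

def frame_score(mono_depths: np.ndarray, lidar_depths: np.ndarray) -> float:
    """Mean absolute depth disagreement for one frame's matched detections."""
    return float(np.abs(mono_depths - lidar_depths).mean())

frames = {f"frame_{i}": (np.random.rand(5) * 50, np.random.rand(5) * 50) for i in range(8)}
ranked = sorted(frames, key=lambda k: frame_score(*frames[k]), reverse=True)
print("label first:", ranked[:3])
```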
arXiv Detail & Related papers (2023-07-17T11:55:27Z)
- Uni3D: A Unified Baseline for Multi-dataset 3D Object Detection [34.2238222373818]
Current 3D object detection models follow a single dataset-specific training and testing paradigm.
In this paper, we study the task of training a unified 3D detector from multiple datasets.
We present a Uni3D which leverages a simple data-level correction operation and a designed semantic-level coupling-and-recoupling module.
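A sketch of what a data-level correction can look like: different datasets mount the LiDAR at different heights, so shifting z onto a shared ground plane is a simple statistics-level fix before joint training. The offsets below are approximate, illustrative values, not Uni3D's calibrated parameters.

```python
# A sketch of a data-level correction: put every dataset's points into a shared
# frame where z = 0 is the ground plane. Heights are approximate assumptions.
import numpy as np

SENSOR_HEIGHT = {"kitti": 1.73, "nuscenes": 1.84}  # meters above ground (approx.)

def to_shared_frame(points: np.ndarray, dataset: str) -> np.ndarray:
    """Shift z so that z = 0 is the ground plane for every dataset."""
    out = points.copy()
    out[:, 2] += SENSOR_HEIGHT[dataset]
    return out

pts = np.random.rand(100, 4)  # x, y, z, intensity
print(to_shared_frame(pts, "kitti")[:1])
```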
arXiv Detail & Related papers (2023-03-13T05:54:13Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
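A minimal sketch of the simulate-the-teacher idea: a LiDAR-only student matches features from a frozen LiDAR-image teacher, so cameras are needed only during training. The L2 objective and BEV feature shapes are common choices assumed here; the paper's exact losses and feature pairs are not reproduced.

```python
# A minimal feature-distillation sketch: LiDAR-only student mimics a frozen
# multi-modal teacher. Loss choice and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def distill_loss(student_feats: torch.Tensor, teacher_feats: torch.Tensor) -> torch.Tensor:
    """L2 distance between student (LiDAR-only) and teacher (LiDAR+image) BEV features."""
    return F.mse_loss(student_feats, teacher_feats.detach())

s = torch.randn(2, 64, 128, 128, requires_grad=True)  # student BEV features
t = torch.randn(2, 64, 128, 128)                      # teacher BEV features (frozen)
print(distill_loss(s, t).item())
```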
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- Dense Voxel Fusion for 3D Object Detection [10.717415797194896]
Dense Voxel Fusion (DVF) is a sequential fusion method that generates multi-scale dense voxel feature representations.
We train directly with ground truth 2D bounding box labels, avoiding noisy, detector-specific, 2D predictions.
We show that our proposed multi-modal training strategy results in better generalization compared to training using erroneous 2D predictions.
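A toy version of training against ground-truth 2D boxes rather than detector outputs: mark projected LiDAR points that land inside any labeled image box as foreground. The projection is assumed to be precomputed and the box format is (x1, y1, x2, y2); this is a sketch of the idea, not DVF's pipeline.

```python
# Mark projected LiDAR points inside any ground-truth 2D box as foreground.
# Assumes pixel coordinates `uv` were precomputed by a calibration projection.
import numpy as np

def foreground_mask(uv: np.ndarray, gt_boxes: np.ndarray) -> np.ndarray:
    """uv: (N, 2) pixel coords of projected points; gt_boxes: (M, 4)."""
    inside = (
        (uv[:, None, 0] >= gt_boxes[None, :, 0]) & (uv[:, None, 0] <= gt_boxes[None, :, 2])
        & (uv[:, None, 1] >= gt_boxes[None, :, 1]) & (uv[:, None, 1] <= gt_boxes[None, :, 3])
    )
    return inside.any(axis=1)  # True where a point lands in some labeled box

uv = np.random.rand(1000, 2) * [1242, 375]  # KITTI-sized image plane
boxes = np.array([[100, 100, 300, 250], [600, 50, 800, 200]], dtype=float)
print(foreground_mask(uv, boxes).sum(), "foreground points")
```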
arXiv Detail & Related papers (2022-03-02T04:51:31Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performances on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
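To illustrate the kind of raw-geometry processing described, here is a minimal 3D convolutional stem over a voxelized LiDAR sweep. Channel counts and grid size are placeholders; SelfVoxeLO's actual network and self-supervised losses are task-specific and not reproduced.

```python
# A minimal 3D conv stem over an occupancy grid from one LiDAR sweep.
# Channels and grid size are illustrative assumptions.
import torch
import torch.nn as nn

stem = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm3d(16),
    nn.ReLU(inplace=True),
    nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
    nn.ReLU(inplace=True),
)

voxels = torch.zeros(1, 1, 64, 64, 64)   # occupancy grid from one LiDAR sweep
voxels[0, 0, 30:34, 30:34, 30:34] = 1.0  # a toy occupied region
print(stem(voxels).shape)                 # torch.Size([1, 32, 16, 16, 16])
```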
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.