Learning Moving-Object Tracking with FMCW LiDAR
- URL: http://arxiv.org/abs/2203.00959v1
- Date: Wed, 2 Mar 2022 09:11:36 GMT
- Title: Learning Moving-Object Tracking with FMCW LiDAR
- Authors: Yi Gu, Hongzhi Cheng, Kafeng Wang, Dejing Dou, Chengzhong Xu and Hui
Kong
- Abstract summary: We propose a learning-based moving-object tracking method utilizing our newly developed LiDAR sensor, Frequency Modulated Continuous Wave (FMCW) LiDAR.
Given the labels, we propose a contrastive learning framework, which pulls together the features from the same instance in embedding space and pushes apart the features from different instances to improve the tracking quality.
- Score: 53.05551269151209
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a learning-based moving-object tracking method
utilizing our newly developed LiDAR sensor, Frequency Modulated Continuous Wave
(FMCW) LiDAR. Compared with most existing commercial LiDAR sensors, our FMCW
LiDAR can provide additional Doppler velocity information for each 3D point of
the point cloud. Benefiting from this, we can generate instance labels as
ground truth in a semi-automatic manner. Given the labels, we propose a
contrastive learning framework, which pulls together the features from the same
instance in embedding space and pushes apart the features from different
instances, to improve the tracking quality. Extensive experiments are conducted
on our recorded driving data, and the results show that our method outperforms
the baseline methods by a large margin.
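The abstract describes two concrete steps: semi-automatic instance labels derived from per-point Doppler velocity, and an instance-level contrastive loss that pulls same-instance features together and pushes different-instance features apart. The sketch below illustrates one way such a pipeline could look; it is not the authors' implementation, and the velocity threshold, DBSCAN clustering, and supervised-contrastive loss form are assumptions made only for illustration.

```python
# Hedged sketch, not the paper's released code: function names, the velocity
# threshold, DBSCAN parameters, and the SupCon-style loss are illustrative
# assumptions based only on the abstract above.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import DBSCAN


def semi_automatic_instance_labels(points_xyz, doppler_velocity,
                                   velocity_threshold=0.5, eps=0.8, min_samples=10):
    """Derive per-point instance labels from Doppler velocity (assumed pipeline).

    points_xyz: (N, 3) LiDAR points; doppler_velocity: (N,) radial velocity in m/s.
    Returns an (N,) int array: -1 for static/noise points, >= 0 for moving instances.
    """
    labels = np.full(points_xyz.shape[0], -1, dtype=np.int64)
    moving = np.abs(doppler_velocity) > velocity_threshold   # keep only moving points
    if moving.sum() >= min_samples:
        clusters = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz[moving])
        labels[moving] = clusters                            # DBSCAN noise stays -1
    return labels


def instance_contrastive_loss(features, instance_labels, temperature=0.1):
    """Pull features of the same instance together and push features of
    different instances apart (supervised-contrastive style, sketch only)."""
    valid = instance_labels >= 0
    z = F.normalize(features[valid], dim=1)                  # (M, D) unit embeddings
    y = instance_labels[valid]
    m = z.shape[0]
    eye = torch.eye(m, device=z.device)
    logits = z @ z.t() / temperature - eye * 1e9             # mask self-similarity
    positives = (y.unsqueeze(0) == y.unsqueeze(1)).float() - eye
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = positives.sum(dim=1)
    has_pos = pos_count > 0                                  # anchors with >= 1 positive
    loss_per_anchor = -(positives * log_prob).sum(dim=1) / pos_count.clamp(min=1)
    return loss_per_anchor[has_pos].mean()


# Example usage with random data (shapes only):
# pts, vel = np.random.randn(1000, 3), np.random.randn(1000)
# labels = torch.from_numpy(semi_automatic_instance_labels(pts, vel))
# feats = torch.randn(1000, 64, requires_grad=True)
# loss = instance_contrastive_loss(feats, labels)
```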
Related papers
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- Just Add $100 More: Augmenting NeRF-based Pseudo-LiDAR Point Cloud for Resolving Class-imbalance Problem [12.26293873825084]
We propose to leverage pseudo-LiDAR point clouds generated from videos capturing a surround view of miniatures or real-world objects of minor classes.
Our method, called Pseudo Ground Truth Augmentation (PGT-Aug), consists of three main steps: (i) volumetric 3D instance reconstruction using a 2D-to-3D view synthesis model, (ii) object-level domain alignment with LiDAR intensity estimation, and (iii) a hybrid context-aware placement method from ground and map information.
arXiv Detail & Related papers (2024-03-18T08:50:04Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Monocular 3D Object Detection with LiDAR Guided Semi Supervised Active Learning [2.16117348324501]
We propose a novel semi-supervised active learning (SSAL) framework for monocular 3D object detection with LiDAR guidance (MonoLiG).
We utilize LiDAR to guide the data selection and training of monocular 3D detectors without introducing any overhead in the inference phase.
Our training strategy attains the top place in KITTI 3D and birds-eye-view (BEV) monocular object detection official benchmarks by improving the BEV Average Precision (AP) by 2.02.
arXiv Detail & Related papers (2023-07-17T11:55:27Z)
- Efficient Spatial-Temporal Information Fusion for LiDAR-Based 3D Moving Object Segmentation [23.666607237164186]
We propose a novel deep neural network exploiting both spatial-temporal information and different representation modalities of LiDAR scans to improve LiDAR-MOS performance.
Specifically, we first use a range image-based dual-branch structure to separately deal with spatial and temporal information.
We also use a point refinement module via 3D sparse convolution to fuse the information from both LiDAR range image and point cloud representations.
arXiv Detail & Related papers (2022-07-05T17:59:17Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection [96.63947479020631]
In many real-world applications, the LiDAR points used by mass-produced robots and vehicles usually have fewer beams than those in large-scale public datasets.
We propose the LiDAR Distillation to bridge the domain gap induced by different LiDAR beams for 3D object detection.
arXiv Detail & Related papers (2022-03-28T17:59:02Z)
- LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR Point Clouds [58.402752909624716]
Existing motion capture datasets are largely short-range and cannot yet meet the needs of long-range applications.
We propose LiDARHuman26M, a new human motion capture dataset captured by LiDAR at a much longer range to overcome this limitation.
Our dataset also includes the ground truth human motions acquired by the IMU system and the synchronous RGB images.
arXiv Detail & Related papers (2022-03-28T12:52:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.