ECTLO: Effective Continuous-time Odometry Using Range Image for LiDAR
with Small FoV
- URL: http://arxiv.org/abs/2206.08517v2
- Date: Thu, 19 Oct 2023 03:25:50 GMT
- Authors: Xin Zheng, Jianke Zhu
- Abstract summary: We present an effective continuous-time LiDAR odometry (ECTLO) method for the Risley-prism-based LiDARs with non-repetitive scanning patterns.
A single range image covering historical points in LiDAR's small FoV is adopted for efficient map representation.
Experiments have been conducted on various testbeds using the prism-based LiDARs with different scanning patterns.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prism-based LiDARs are more compact and cheaper than conventional
mechanical multi-line spinning LiDARs, and they have recently become
increasingly popular in robotics. However, these new sensors pose several
challenges, including a small field of view, severe motion distortion, and
irregular scanning patterns, which hinder their practical adoption for LiDAR
odometry. To tackle these problems, we present an effective continuous-time
LiDAR odometry (ECTLO) method for the Risley-prism-based LiDARs with
non-repetitive scanning patterns. A single range image covering historical
points in LiDAR's small FoV is adopted for efficient map representation. To
account for the noisy data from occlusions after map updating, a filter-based
point-to-plane Gaussian Mixture Model is used for robust registration.
Moreover, a LiDAR-only continuous-time motion model is employed to mitigate the
inevitable motion distortions. Extensive experiments on various testbeds using
prism-based LiDARs with different scanning patterns yield promising results
that demonstrate the efficacy of our approach.
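The continuous-time idea can be illustrated with a minimal de-skewing sketch: each return carries a timestamp within the sweep, and its sensor pose is interpolated between the sweep's start and end poses before the point is mapped into a common frame. The planar (yaw, tx, ty) parameterization below is a deliberate simplification for illustration, not the paper's actual trajectory representation.

```python
import numpy as np

def deskew_scan(points, stamps, pose_start, pose_end):
    """De-skew a sweep by linearly interpolating the sensor pose.

    points     : (N, 2) xy points in the sensor frame
    stamps     : (N,) per-point times normalized to [0, 1] over the sweep
    pose_start : (yaw, tx, ty) at the start of the sweep
    pose_end   : (yaw, tx, ty) at the end of the sweep
    """
    yaw0, t0 = pose_start[0], np.asarray(pose_start[1:], dtype=float)
    yaw1, t1 = pose_end[0], np.asarray(pose_end[1:], dtype=float)
    out = np.empty_like(points, dtype=float)
    for i, (p, s) in enumerate(zip(points, stamps)):
        yaw = (1 - s) * yaw0 + s * yaw1   # interpolated heading
        t = (1 - s) * t0 + s * t1         # interpolated translation
        c, si = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -si], [si, c]])
        out[i] = R @ p + t                # point expressed in the common frame
    return out
```

Real systems interpolate on SE(3) (or use splines, as continuous-time methods do), but the per-point timestamp interpolation above is the core mechanism that relieves motion distortion.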
Related papers
- UltraLiDAR: Learning Compact Representations for LiDAR Completion and Generation [51.443788294845845]
We present UltraLiDAR, a data-driven framework for scene-level LiDAR completion, LiDAR generation, and LiDAR manipulation.
We show that by aligning the representation of a sparse point cloud to that of a dense point cloud, we can densify the sparse point clouds.
By learning a prior over the discrete codebook, we can generate diverse, realistic LiDAR point clouds for self-driving.
arXiv Detail & Related papers (2023-11-02T17:57:03Z)
- Traj-LO: In Defense of LiDAR-Only Odometry Using an Effective Continuous-Time Trajectory [20.452961476175812]
This letter explores the capability of LiDAR-only odometry through a continuous-time perspective.
Our proposed Traj-LO approach aims to recover the spatially and temporally consistent motion of the LiDAR.
Our implementation is open-sourced on GitHub.
arXiv Detail & Related papers (2023-09-25T03:05:06Z)
- Detecting the Anomalies in LiDAR Pointcloud [8.827947115933942]
Adverse weather conditions may cause the LiDAR to produce point clouds with abnormal patterns such as scattered noise points and uncommon intensity values.
We propose a novel approach to detect whether a LiDAR is generating anomalous point clouds by analyzing their characteristics.
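A toy version of such a characteristics-based check might flag a scan whose fraction of isolated ("scattered") points is too high. All thresholds below are illustrative assumptions, not the paper's method, and only the scattered-noise cue is covered (intensity statistics are omitted).

```python
import numpy as np

def scan_is_anomalous(points, radius=0.5, min_neighbors=2, max_noise_ratio=0.1):
    """Flag a scan as anomalous when too many points are isolated.

    points : (N, 2) or (N, 3) array; a point is "scattered" if it has
    fewer than `min_neighbors` other points within `radius`.
    """
    # pairwise distances (O(N^2); fine for a sketch, use a KD-tree in practice)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = (d < radius).sum(axis=1) - 1  # exclude the point itself
    noise_ratio = np.mean(neighbors < min_neighbors)
    return noise_ratio > max_noise_ratio
```

A dense, well-formed scan yields a near-zero noise ratio, while weather-induced scatter drives it up past the threshold.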
arXiv Detail & Related papers (2023-07-31T22:53:42Z)
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- Stress-Testing LiDAR Registration [52.24383388306149]
We propose a method for selecting balanced registration sets, which are challenging sets of frame-pairs from LiDAR datasets.
Perhaps unexpectedly, we find that the fastest and simultaneously most accurate approach is a version of advanced RANSAC.
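RANSAC-style registration can be sketched in its most basic form: sample minimal correspondence sets, fit a rigid transform in closed form, and keep the hypothesis with the most inliers. The 2D version below is a simplified illustration under the assumption of given putative correspondences, far from the "advanced" variants the paper benchmarks.

```python
import numpy as np

def _fit_rigid(a, b):
    """Closed-form rigid fit (Kabsch) for matched 2D point pairs b ~ R a + t."""
    ca, cb = a.mean(0), b.mean(0)
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cb - R @ ca

def ransac_rigid_2d(src, dst, n_iters=100, thresh=0.1, seed=0):
    """Estimate a 2D rigid transform between corresponding point sets."""
    rng = np.random.default_rng(seed)
    best_inliers, best_Rt = -1, None
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=2, replace=False)  # minimal sample
        R, t = _fit_rigid(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_Rt = inliers, (R, t)
    return best_Rt
```

Production variants add guided sampling, local refinement, and early termination, which is where most of the speed and accuracy differences the paper measures come from.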
arXiv Detail & Related papers (2022-04-16T05:10:55Z)
- LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection [96.63947479020631]
In many real-world applications, the LiDAR points used by mass-produced robots and vehicles usually have fewer beams than those in large-scale public datasets.
We propose the LiDAR Distillation to bridge the domain gap induced by different LiDAR beams for 3D object detection.
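The beam-induced gap can be illustrated by synthetically thinning a high-beam scan: bin points into elevation "beams" and keep an evenly spaced subset. This is a rough sketch of the idea, not the paper's actual pseudo-low-beam generation or distillation pipeline.

```python
import numpy as np

def downsample_beams(points, n_src_beams=64, n_tgt_beams=32):
    """Mimic a lower-beam LiDAR by dropping evenly spaced elevation beams.

    points : (N, 3) xyz array; beams are approximated by uniformly
    binning the elevation angle into `n_src_beams` bins.
    """
    xy = np.linalg.norm(points[:, :2], axis=1)
    elev = np.arctan2(points[:, 2], xy)              # elevation angle per point
    lo, hi = elev.min(), elev.max() + 1e-9
    beam = ((elev - lo) / (hi - lo) * n_src_beams).astype(int)
    keep_every = n_src_beams // n_tgt_beams
    return points[beam % keep_every == 0]            # keep every k-th beam
```

Real sensors have non-uniform beam spacing, so production code would bin against the sensor's calibrated elevation table instead of a uniform grid.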
arXiv Detail & Related papers (2022-03-28T17:59:02Z)
- LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR Point Clouds [58.402752909624716]
Existing motion capture datasets are largely short-range and cannot yet fit the need of long-range applications.
We propose LiDARHuman26M, a new human motion capture dataset captured by LiDAR at a much longer range to overcome this limitation.
Our dataset also includes the ground truth human motions acquired by the IMU system and the synchronous RGB images.
arXiv Detail & Related papers (2022-03-28T12:52:45Z)
- Efficient LiDAR Odometry for Autonomous Driving [16.22522474028277]
LiDAR odometry plays an important role in self-localization and mapping for autonomous navigation.
Recent spherical range-image-based methods enjoy fast nearest-neighbor search via spherical mapping.
We propose a novel efficient LiDAR odometry approach by taking advantage of both non-ground spherical range image and bird's-eye-view map for ground points.
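Spherical mapping itself is straightforward to sketch: each point's azimuth and elevation index a pixel of a range image, and the nearest return is kept per pixel, which is what makes neighbor lookups cheap. Image size and vertical FoV below are illustrative values, not any particular sensor's specification.

```python
import numpy as np

def project_to_range_image(points, h=32, w=512, fov_up=15.0, fov_down=-15.0):
    """Project an (N, 3) point cloud into an (h, w) spherical range image."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])       # azimuth -> column
    pitch = np.arcsin(points[:, 2] / r)                # elevation -> row
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * h).astype(int), 0, h - 1)
    img = np.full((h, w), np.inf)
    for ui, vi, ri in zip(u, v, r):
        img[vi, ui] = min(img[vi, ui], ri)             # keep nearest return
    return img
```

Once points live in this image, the neighbors of a pixel approximate the spatial neighbors of the corresponding 3D point, replacing a KD-tree query with constant-time indexing.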
arXiv Detail & Related papers (2021-04-22T06:05:09Z)
- Cirrus: A Long-range Bi-pattern LiDAR Dataset [35.87501129332217]
We introduce Cirrus, a new long-range bi-pattern LiDAR public dataset for autonomous driving tasks.
Our platform is equipped with a high-resolution video camera and a pair of LiDAR sensors with a 250-meter effective range.
In Cirrus, eight categories of objects are exhaustively annotated in the LiDAR point clouds for the entire effective range.
arXiv Detail & Related papers (2020-12-05T03:18:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.