PWCLO-Net: Deep LiDAR Odometry in 3D Point Clouds Using Hierarchical
Embedding Mask Optimization
- URL: http://arxiv.org/abs/2012.00972v2
- Date: Fri, 2 Apr 2021 05:20:05 GMT
- Authors: Guangming Wang, Xinrui Wu, Zhe Liu, Hesheng Wang
- Abstract summary: A novel 3D point cloud learning model for deep LiDAR odometry, named PWCLO-Net, is proposed in this paper.
In this model, the Pyramid, Warping, and Cost volume structure for the LiDAR odometry task is built to refine the estimated pose in a coarse-to-fine approach hierarchically.
Our method outperforms all recent learning-based methods, and also the geometry-based LOAM with mapping optimization, on most sequences of the KITTI odometry dataset.
- Score: 17.90299648470637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A novel 3D point cloud learning model for deep LiDAR odometry, named
PWCLO-Net, using hierarchical embedding mask optimization is proposed in this
paper. In this model, the Pyramid, Warping, and Cost volume (PWC) structure for
the LiDAR odometry task is built to refine the estimated pose in a
coarse-to-fine approach hierarchically. An attentive cost volume is built to
associate two point clouds and obtain embedding motion patterns. Then, a novel
trainable embedding mask is proposed to weigh the local motion patterns of all
points to regress the overall pose and filter outlier points. The estimated
current pose is used to warp the first point cloud to bridge the distance to
the second point cloud, and then the cost volume of the residual motion is
built. At the same time, the embedding mask is optimized hierarchically from
coarse to fine to obtain more accurate filtering information for pose
refinement. The trainable pose warp-refinement process is iteratively used to
make the pose estimation more robust for outliers. The superior performance and
effectiveness of our LiDAR odometry model are demonstrated on KITTI odometry
dataset. Our method outperforms all recent learning-based methods and
outperforms the geometry-based approach, LOAM with mapping optimization, on
most sequences of the KITTI odometry dataset. Our source code will be released at
https://github.com/IRMVLab/PWCLONet.
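The core step described above, weighting per-point motion embeddings with a trainable mask and regressing a single pose from the weighted aggregate, can be sketched as follows. This is a minimal illustration under stated assumptions, not the released implementation: the function name `masked_pose_regression`, the fixed `W_pose` matrix, and the 7-D quaternion-plus-translation output layout are hypothetical stand-ins for the paper's learned fully connected layers.

```python
import numpy as np

def masked_pose_regression(motion_embeddings, mask_logits):
    """Sketch of mask-weighted pose regression (hypothetical, not the
    authors' code).

    motion_embeddings: (N, C) per-point motion features from the cost volume
    mask_logits:       (N,)  learned per-point scores; outliers get low weight
    """
    # Softmax over points turns the raw mask scores into normalized weights.
    w = np.exp(mask_logits - mask_logits.max())
    w /= w.sum()
    # Weighted aggregation down-weights outlier points before regression.
    pooled = (w[:, None] * motion_embeddings).sum(axis=0)   # shape (C,)
    # Stand-in for the learned FC layers: a fixed linear map to a 7-D pose
    # (quaternion q followed by translation t).
    W_pose = np.full((7, pooled.shape[0]), 0.01)            # hypothetical weights
    pose = W_pose @ pooled
    q, t = pose[:4], pose[4:]
    q = q / (np.linalg.norm(q) + 1e-8)                      # unit quaternion
    return q, t
```

In the paper this weighting is applied at every pyramid level, so the mask is re-estimated from coarse to fine as the warped residual motion shrinks.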
Related papers
- MultiPull: Detailing Signed Distance Functions by Pulling Multi-Level Queries at Multi-Step [48.812388649469106]
We propose a novel method to learn multi-scale implicit fields from raw point clouds by optimizing accurate SDFs from coarse to fine.
Our experiments on widely used object and scene benchmarks demonstrate that our method outperforms the state-of-the-art methods in surface reconstruction.
arXiv Detail & Related papers (2024-11-02T10:50:22Z)
- DSLO: Deep Sequence LiDAR Odometry Based on Inconsistent Spatio-temporal Propagation [66.8732965660931]
This paper introduces DSLO, a 3D point cloud sequence learning model for LiDAR odometry based on inconsistent spatio-temporal propagation.
It consists of a pyramid structure with a sequential pose module, a hierarchical pose refinement module, and a temporal feature propagation module.
arXiv Detail & Related papers (2024-09-01T15:12:48Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- GPCO: An Unsupervised Green Point Cloud Odometry Method [64.86292006892093]
A lightweight point cloud odometry solution is proposed and named the green point cloud odometry (GPCO) method.
GPCO is an unsupervised learning method that predicts object motion by matching features of consecutive point cloud scans.
It is observed that GPCO outperforms benchmarking deep learning methods in accuracy while having a significantly smaller model size and shorter training time.
arXiv Detail & Related papers (2021-12-08T00:24:03Z)
- Efficient 3D Deep LiDAR Odometry [16.388259779644553]
An efficient 3D point cloud learning architecture, named PWCLO-Net, is first proposed in this paper.
The entire architecture is holistically optimized end-to-end to achieve adaptive learning of cost volume and mask.
arXiv Detail & Related papers (2021-11-03T11:09:49Z)
- Lifting 2D Object Locations to 3D by Discounting LiDAR Outliers across Objects and Views [70.1586005070678]
We present a system for automatically converting 2D mask object predictions and raw LiDAR point clouds into full 3D bounding boxes of objects.
Our method significantly outperforms previous work, even though those methods rely on more complex pipelines, 3D models, and additional human-annotated external sources of prior information.
arXiv Detail & Related papers (2021-09-16T13:01:13Z)
- Soft Expectation and Deep Maximization for Image Feature Detection [68.8204255655161]
We propose SEDM, an iterative semi-supervised learning process that flips the question and first looks for repeatable 3D points, then trains a detector to localize them in image space.
Our results show that this new model trained using SEDM is able to better localize the underlying 3D points in a scene.
arXiv Detail & Related papers (2021-04-21T00:35:32Z)
- Point Cloud based Hierarchical Deep Odometry Estimation [3.058685580689605]
We propose a deep model that learns to estimate odometry in driving scenarios using point cloud data.
The proposed model consumes raw point clouds to produce frame-to-frame odometry estimates.
arXiv Detail & Related papers (2021-03-05T00:17:58Z)
- Scan-based Semantic Segmentation of LiDAR Point Clouds: An Experimental Study [2.6205925938720833]
State of the art methods use deep neural networks to predict semantic classes for each point in a LiDAR scan.
A powerful and efficient way to process LiDAR measurements is to use two-dimensional, image-like projections.
We demonstrate various techniques to boost the performance and to improve runtime as well as memory constraints.
arXiv Detail & Related papers (2020-04-06T11:08:12Z)
- Towards Better Generalization: Joint Depth-Pose Learning without PoseNet [36.414471128890284]
We tackle the essential problem of scale inconsistency for self-supervised joint depth-pose learning.
Most existing methods assume that a consistent scale of depth and pose can be learned across all input samples.
We propose a novel system that explicitly disentangles scale from the network estimation.
arXiv Detail & Related papers (2020-04-03T00:28:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.