Quadric Representations for LiDAR Odometry, Mapping and Localization
- URL: http://arxiv.org/abs/2304.14190v1
- Date: Thu, 27 Apr 2023 13:52:01 GMT
- Title: Quadric Representations for LiDAR Odometry, Mapping and Localization
- Authors: Chao Xia, Chenfeng Xu, Patrick Rim, Mingyu Ding, Nanning Zheng, Kurt
Keutzer, Masayoshi Tomizuka, Wei Zhan
- Abstract summary: Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and memory utility while achieving competitive, and even superior, accuracy.
- Score: 93.24140840537912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current LiDAR odometry, mapping and localization methods leverage point-wise
representations of 3D scenes and achieve high accuracy in autonomous driving
tasks. However, the space-inefficiency of methods that use point-wise
representations limits their development and usage in practical applications.
In particular, scan-submap matching and global map representation methods are
restricted by the inefficiency of nearest neighbor searching (NNS) for
large-volume point clouds. To improve space-time efficiency, we propose a novel
method of describing scenes using quadric surfaces, which are far more compact
representations of 3D objects than conventional point clouds. In contrast to
point cloud-based methods, our quadric representation-based method decomposes a
3D scene into a collection of sparse quadric patches, which improves storage
efficiency and avoids the slow point-wise NNS process. Our method first
segments a given point cloud into patches and fits each of them to a quadric
implicit function. Each function is then coupled with other geometric
descriptors of the patch, such as its center position and covariance matrix.
Collectively, these patch representations fully describe a 3D scene, which can
be used in place of the original point cloud and employed in LiDAR odometry,
mapping and localization algorithms. We further design a novel incremental
growing method for quadric representations, which eliminates the need to
repeatedly re-fit quadric surfaces from the original point cloud. Extensive
odometry, mapping and localization experiments on large-volume point clouds in
the KITTI and UrbanLoco datasets demonstrate that our method maintains low
latency and memory utility while achieving competitive, and even superior,
accuracy.
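The patch-fitting step the abstract describes (segment a point cloud into patches, fit each patch to a quadric implicit function, and attach the patch's center and covariance) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: it fits the 10 coefficients of a general quadric by constrained least squares, taking the right singular vector of the monomial design matrix that belongs to the smallest singular value.

```python
import numpy as np

def fit_quadric_patch(points):
    """Fit f(p) = q . m(p) = 0 to one patch, where m(p) stacks the 10 quadric
    monomials [x^2, y^2, z^2, xy, xz, yz, x, y, z, 1]. Returns the coefficient
    vector together with the patch's center and covariance."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    D = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z,
                         x, y, z, np.ones_like(x)])
    # Minimize ||D q|| subject to ||q|| = 1: the minimizer is the right
    # singular vector of D associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    q = Vt[-1]
    # Geometric descriptors kept alongside the implicit function.
    center = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    return q, center, cov

# Toy patch: points sampled from the paraboloid z = x^2 + y^2.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
patch = np.column_stack([xy, (xy ** 2).sum(axis=1)])
q, center, cov = fit_quadric_patch(patch)
```

On this noiseless toy patch the recovered coefficients are proportional to those of x² + y² − z = 0; a practical pipeline would add outlier handling and the paper's incremental growing step, which avoids re-fitting from the raw points.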
Related papers
- 3D Reconstruction with Fast Dipole Sums [12.865206085308728]
We introduce a method for high-quality 3D reconstruction from multiview images.
We represent implicit geometry and radiance fields as per-point attributes of a dense point cloud.
Queries against these attributes facilitate the use of ray tracing to efficiently and differentiably render images.
arXiv Detail & Related papers (2024-05-27T03:23:25Z)
- iPUNet: Iterative Cross Field Guided Point Cloud Upsampling [20.925921503694894]
Point clouds acquired by 3D scanning devices are often sparse, noisy, and non-uniform, causing a loss of geometric features.
We present a learning-based point upsampling method, iPUNet, which generates dense and uniform points at arbitrary ratios.
We demonstrate that iPUNet is robust to handle noisy and non-uniformly distributed inputs, and outperforms state-of-the-art point cloud upsampling methods.
arXiv Detail & Related papers (2023-10-13T13:24:37Z)
- PointOcc: Cylindrical Tri-Perspective View for Point-based 3D Semantic Occupancy Prediction [72.75478398447396]
We propose a cylindrical tri-perspective view to represent point clouds effectively and comprehensively.
Considering the distance distribution of LiDAR point clouds, we construct the tri-perspective view in the cylindrical coordinate system.
We employ spatial group pooling to maintain structural details during projection and adopt 2D backbones to efficiently process each TPV plane.
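The coordinate choice in this summary can be illustrated with a deliberately simple toy (an assumption-laden sketch, not PointOcc's actual TPV pipeline): convert Cartesian LiDAR points to cylindrical coordinates (ρ, φ, z) and count points per grid cell.

```python
import numpy as np

def cylindrical_bin_counts(points, rho_bins=32, phi_bins=64, z_bins=8,
                           rho_max=50.0, z_min=-3.0, z_max=5.0):
    """points: (N, 3) Cartesian. Returns an occupancy-count grid of shape
    (rho_bins, phi_bins, z_bins) in cylindrical coordinates."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.hypot(x, y)        # radial distance from the sensor axis
    phi = np.arctan2(y, x)      # azimuth in [-pi, pi]
    keep = (rho < rho_max) & (z >= z_min) & (z < z_max)
    ri = (rho[keep] / rho_max * rho_bins).astype(int)
    fi = ((phi[keep] + np.pi) / (2 * np.pi) * phi_bins).astype(int) % phi_bins
    zi = ((z[keep] - z_min) / (z_max - z_min) * z_bins).astype(int)
    grid = np.zeros((rho_bins, phi_bins, z_bins), dtype=np.int64)
    np.add.at(grid, (ri, fi, zi), 1)  # unbuffered accumulation per cell
    return grid

# Toy scan: a flat-ish cloud within the grid bounds.
pts = np.random.default_rng(1).uniform(-20, 20, size=(1000, 3)) * [1, 1, 0.1]
grid = cylindrical_bin_counts(pts)
```

Binning along ρ rather than x/y matches the distance distribution of LiDAR returns, which the summary cites as the motivation for the cylindrical system.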
arXiv Detail & Related papers (2023-08-31T17:57:17Z)
- DELFlow: Dense Efficient Learning of Scene Flow for Large-Scale Point Clouds [42.64433313672884]
We regularize raw points to a dense format by storing 3D coordinates in 2D grids.
Unlike the sampling operation commonly used in existing works, the dense 2D representation preserves most points.
We also present a novel warping projection technique to alleviate the information loss problem.
arXiv Detail & Related papers (2023-08-08T16:37:24Z)
- Unleash the Potential of 3D Point Cloud Modeling with A Calibrated Local Geometry-driven Distance Metric [62.365983810610985]
We propose a novel distance metric called Calibrated Local Geometry Distance (CLGD)
CLGD computes the difference between the underlying 3D surfaces calibrated and induced by a set of reference points.
As a generic metric, CLGD has the potential to advance 3D point cloud modeling.
arXiv Detail & Related papers (2023-06-01T11:16:20Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Revisiting Point Cloud Simplification: A Learnable Feature Preserving Approach [57.67932970472768]
Mesh and Point Cloud simplification methods aim to reduce the complexity of 3D models while retaining visual quality and relevant salient features.
We propose a fast point cloud simplification method by learning to sample salient points.
The proposed method relies on a graph neural network architecture trained to select an arbitrary, user-defined, number of points from the input space and to re-arrange their positions so as to minimize the visual perception error.
arXiv Detail & Related papers (2021-09-30T10:23:55Z)
- SCTN: Sparse Convolution-Transformer Network for Scene Flow Estimation [71.2856098776959]
Estimating 3D motions for point clouds is challenging, since a point cloud is unordered and its density is significantly non-uniform.
We propose a novel architecture named Sparse Convolution-Transformer Network (SCTN) that equips the sparse convolution with the transformer.
We show that the learned relation-based contextual information is rich and helpful for matching corresponding points, benefiting scene flow estimation.
arXiv Detail & Related papers (2021-05-10T15:16:14Z)
- Robust Kernel-based Feature Representation for 3D Point Cloud Analysis via Circular Graph Convolutional Network [2.42919716430661]
We present a new local feature description method that is robust to rotation, density, and scale variations.
To improve representations of the local descriptors, we propose a global aggregation method.
Our method shows superior performances when compared to the state-of-the-art methods.
arXiv Detail & Related papers (2020-12-22T18:02:57Z)
- LOL: Lidar-Only Odometry and Localization in 3D Point Cloud Maps [0.6091702876917281]
We deal with the problem of odometry and localization for Lidar-equipped vehicles driving in urban environments.
We apply a place recognition method to detect geometrically similar locations between the online 3D point cloud and the a priori offline map.
We demonstrate the utility of the proposed LOL system on several KITTI datasets of different lengths and environments.
arXiv Detail & Related papers (2020-07-03T10:20:53Z)
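The retrieval idea in the LOL summary (detect geometrically similar locations between the online point cloud and an offline map) can be illustrated with a deliberately simple stand-in: a toy range-histogram descriptor compared by Euclidean distance. This is an illustrative assumption throughout, not the descriptor or matcher LOL actually uses.

```python
import numpy as np

def range_histogram(points, bins=32, r_max=80.0):
    """Toy rotation-invariant global descriptor: normalized histogram of
    point ranges. Purely illustrative; real systems use richer descriptors."""
    r = np.linalg.norm(points, axis=1)
    h, _ = np.histogram(r, bins=bins, range=(0.0, r_max))
    return h / max(h.sum(), 1)  # normalize so clouds of different sizes compare

def retrieve_submap(scan, submaps):
    """Return the index of the offline submap whose descriptor is closest
    to the online scan's descriptor."""
    d_scan = range_histogram(scan)
    dists = [np.linalg.norm(d_scan - range_histogram(s)) for s in submaps]
    return int(np.argmin(dists))

# Toy offline map: three submaps with clearly different spatial extents.
rng = np.random.default_rng(2)
submaps = [rng.normal(scale=s, size=(500, 3)) for s in (5.0, 15.0, 30.0)]
# An online scan revisiting submap 1, with small sensor noise.
scan = submaps[1] + rng.normal(scale=0.05, size=(500, 3))
best = retrieve_submap(scan, submaps)
```

A full pipeline would follow retrieval with a geometric verification and pose refinement step before fusing the result into the odometry estimate.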
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.