Mapping of Sparse 3D Data using Alternating Projection
- URL: http://arxiv.org/abs/2010.02516v2
- Date: Fri, 9 Oct 2020 18:22:45 GMT
- Title: Mapping of Sparse 3D Data using Alternating Projection
- Authors: Siddhant Ranade, Xin Yu, Shantnu Kakkar, Pedro Miraldo, Srikumar
Ramalingam
- Abstract summary: We propose a novel technique to register sparse 3D scans in the absence of texture.
Existing methods such as KinectFusion heavily rely on dense point clouds.
We propose the use of a two-step alternating projection algorithm by formulating the registration as the simultaneous satisfaction of intersection and rigidity constraints.
- Score: 35.735398244213584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel technique to register sparse 3D scans in the absence of
texture. While existing methods such as KinectFusion or Iterative Closest
Points (ICP) heavily rely on dense point clouds, this task is particularly
challenging under sparse conditions without RGB data. Sparse texture-less data
does not come with high-quality boundary signal, and this prohibits the use of
correspondences from corners, junctions, or boundary lines. Moreover, in the
case of sparse data, it is incorrect to assume that the same point will be
captured in two consecutive scans. We take a different approach and first
re-parameterize the point-cloud using a large number of line segments. In this
re-parameterized data, there exists a large number of line intersection (and
not correspondence) constraints that allow us to solve the registration task.
We propose the use of a two-step alternating projection algorithm by
formulating the registration as the simultaneous satisfaction of intersection
and rigidity constraints. The proposed approach outperforms other top-scoring
algorithms on both Kinect and LiDAR datasets. In Kinect, we can use 100X
downsampled sparse data and still outperform competing methods operating on
full-resolution data.
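To make the alternating-projection idea concrete, here is a minimal Python/NumPy sketch under simplified assumptions: each hypothesized intersection constraint is approximated by a closest-point computation between two 3D lines, and the rigidity constraint is enforced with a standard Kabsch/Procrustes fit. The function names, the line pairing, and the iteration schedule are illustrative and do not reproduce the paper's full formulation.

```python
import numpy as np

def closest_points_on_lines(a0, ad, b0, bd):
    """Closest points on the 3D lines a0 + s*ad and b0 + t*bd.
    If the lines truly intersect, the two returned points coincide."""
    ad, bd = ad / np.linalg.norm(ad), bd / np.linalg.norm(bd)
    w0 = a0 - b0
    b = ad @ bd
    d, e = ad @ w0, bd @ w0
    denom = 1.0 - b * b
    if denom < 1e-12:                    # nearly parallel lines
        s, t = 0.0, e
    else:
        s, t = (b * e - d) / denom, (e - b * d) / denom
    return a0 + s * ad, b0 + t * bd

def best_rigid_transform(src, dst):
    """Kabsch/Procrustes: rigid (R, t) minimizing sum ||R @ src_i + t - dst_i||^2."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    if np.linalg.det(Vt.T @ U.T) < 0:    # guard against reflections
        Vt[-1] *= -1
    R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def alternating_projection(lines_a, lines_b, pairs, iters=50):
    """Toy two-step alternation between intersection and rigidity constraints.
    lines_a, lines_b: (N, 2, 3) arrays of (point, direction) per line segment.
    pairs: hypothesized intersecting pairs (i, j) across the two scans."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        src, dst = [], []
        for i, j in pairs:
            a0, ad = R @ lines_a[i, 0] + t, R @ lines_a[i, 1]   # scan A in current pose
            b0, bd = lines_b[j, 0], lines_b[j, 1]
            pa, pb = closest_points_on_lines(a0, ad, b0, bd)
            # Intersection step: the point on line A should land on line B.
            src.append(R.T @ (pa - t))                          # back to A's own frame
            dst.append(pb)
        # Rigidity step: project onto the set of rigid motions.
        R, t = best_rigid_transform(np.asarray(src), np.asarray(dst))
    return R, t
```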
Related papers
- Inferring Neural Signed Distance Functions by Overfitting on Single Noisy Point Clouds through Finetuning Data-Driven based Priors [53.6277160912059]
We propose a method that combines the strengths of data-driven and overfitting-based methods for better generalization, faster inference, and higher accuracy in learning neural SDFs.
We introduce a novel statistical reasoning algorithm in local regions which is able to finetune data-driven priors without signed distance supervision, clean point clouds, or point normals.
arXiv Detail & Related papers (2024-10-25T16:48:44Z)
- DELFlow: Dense Efficient Learning of Scene Flow for Large-Scale Point Clouds [42.64433313672884]
We regularize raw points to a dense format by storing 3D coordinates in 2D grids.
Unlike the sampling operation commonly used in existing works, the dense 2D representation preserves most points.
We also present a novel warping projection technique to alleviate the information loss problem.
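For intuition on the dense 2D representation, the following hypothetical sketch scatters raw 3D LiDAR points into an H x W grid that stores their (x, y, z) coordinates, in the spirit of a range-image projection; the field-of-view parameters and the collision rule are assumptions, not the paper's implementation.

```python
import numpy as np

def points_to_grid(points, h=64, w=2048, fov_up=3.0, fov_down=-25.0):
    """Store the 3D coordinates of an (N, 3) LiDAR sweep in a dense h x w grid.
    Empty cells stay zero; on collisions the closer point wins."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8
    yaw, pitch = np.arctan2(y, x), np.arcsin(z / r)
    up, down = np.radians(fov_up), np.radians(fov_down)
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
    v = ((up - pitch) / (up - down) * h).astype(int).clip(0, h - 1)
    grid = np.zeros((h, w, 3), dtype=points.dtype)
    order = np.argsort(-r)                      # write far points first ...
    grid[v[order], u[order]] = points[order]    # ... so near points overwrite them
    return grid
```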
arXiv Detail & Related papers (2023-08-08T16:37:24Z)
- Quadric Representations for LiDAR Odometry, Mapping and Localization [93.24140840537912]
Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and memory utility while achieving competitive, and even superior, accuracy.
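To illustrate how compact a quadric description is, the generic least-squares fit below recovers the 10 coefficients of an implicit quadric q(x) = x^T A x + b^T x + c = 0 from a local patch of points; it is a stand-in estimator, not necessarily the one used in the paper.

```python
import numpy as np

def fit_quadric(points):
    """Fit an implicit quadric to an (N, 3) patch of points.
    Returns the 10 coefficients, up to scale, as the right singular
    vector associated with the smallest singular value."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Design matrix of the 10 quadric monomials per point.
    M = np.stack([x*x, y*y, z*z, x*y, y*z, x*z, x, y, z, np.ones_like(x)], axis=1)
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[-1]
```

A patch is then summarized by 10 numbers instead of its raw points, which is where the memory and latency savings come from.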
arXiv Detail & Related papers (2023-04-27T13:52:01Z)
- Graph R-CNN: Towards Accurate 3D Object Detection with Semantic-Decorated Local Graph [26.226885108862735]
Two-stage detectors have gained much popularity in 3D object detection.
Most two-stage 3D detectors utilize grid points, voxel grids, or sampled keypoints for RoI feature extraction in the second stage.
This paper addresses the shortcomings of these RoI feature extraction schemes in three aspects.
arXiv Detail & Related papers (2022-08-07T02:56:56Z)
- Learning to Register Unbalanced Point Pairs [10.369750912567714]
Recent 3D registration methods can effectively handle large-scale or partially overlapping point pairs, but pairs that are unbalanced in spatial extent and density have received less attention.
We present a novel 3D registration method, called UPPNet, for such unbalanced point pairs.
arXiv Detail & Related papers (2022-07-09T08:03:59Z)
- POCO: Point Convolution for Surface Reconstruction [92.22371813519003]
Implicit neural networks have been successfully used for surface reconstruction from point clouds.
Many of them face scalability issues as they encode the isosurface function of a whole object or scene into a single latent vector.
We propose to use point cloud convolutions and compute latent vectors at each input point.
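A toy illustration of computing a latent vector at every input point rather than a single global code: one "point convolution" layer that mixes each point's k-nearest-neighbour features with their relative offsets. The layer, its weight shape, and the pooling are illustrative assumptions, not the POCO architecture.

```python
import numpy as np

def knn_point_conv(points, feats, weight, k=16):
    """points: (n, 3); feats: (n, f); weight: (f + 3, latent_dim).
    Returns one latent vector per input point, shape (n, latent_dim)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]               # k nearest neighbours (incl. self)
    rel = points[nn] - points[:, None, :]            # (n, k, 3) relative offsets
    x = np.concatenate([feats[nn], rel], axis=-1)    # (n, k, f + 3)
    return np.maximum(x @ weight, 0.0).mean(axis=1)  # shared linear map + ReLU + mean pool
```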
arXiv Detail & Related papers (2022-01-05T21:26:18Z)
- Towards Fine-grained 3D Face Dense Registration: An Optimal Dividing and Diffusing Method [17.38748022631488]
Dense vertex-to-vertex correspondence between 3D faces is a fundamental and challenging issue for 3D and 2D face analysis.
In this paper, we revisit dense registration via a dimension-degraded problem, i.e., proportional segmentation of a line segment.
We employ an iterative dividing and diffusing method to reach the final solution uniquely.
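As a rough illustration of the dimension-degraded problem, the snippet below splits a line segment into pieces proportional to given weights by repeatedly nudging ("diffusing") the interior cut points; it conveys the flavour of dividing-and-diffusing, not the paper's exact scheme.

```python
import numpy as np

def proportional_segmentation(length, weights, iters=200, rate=0.5):
    """Place cut points on [0, length] so that the pieces become
    proportional to `weights`, by local relaxation of each interior cut."""
    w = np.asarray(weights, dtype=float)
    cuts = np.linspace(0.0, length, len(w) + 1)      # start from a uniform split
    for _ in range(iters):
        for i in range(1, len(w)):
            span = cuts[i + 1] - cuts[i - 1]
            # Target: the two adjacent pieces in the ratio w[i-1] : w[i].
            target = cuts[i - 1] + span * w[i - 1] / (w[i - 1] + w[i])
            cuts[i] += rate * (target - cuts[i])     # diffuse toward the target
    return np.diff(cuts)                             # resulting piece lengths
```

For example, proportional_segmentation(1.0, [1, 2, 3]) settles near piece lengths of roughly 1/6, 1/3, and 1/2.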
arXiv Detail & Related papers (2021-09-23T08:31:35Z)
- Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling [52.464516118826765]
We introduce RandLA-Net, an efficient and lightweight neural architecture to infer per-point semantics for large-scale point clouds.
The key to our approach is to use random point sampling instead of more complex point selection approaches.
Our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches.
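The speed claim hinges on the sampler being a cheap O(N) index selection; a minimal sketch of random downsampling (as opposed to farthest-point sampling), with hypothetical names, is given below.

```python
import numpy as np

def random_downsample(points, feats, ratio=0.25, rng=None):
    """Keep a random `ratio` of the points and their features.
    Unlike farthest-point sampling, the cost does not grow quadratically with N."""
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.choice(len(points), size=int(len(points) * ratio), replace=False)
    return points[keep], feats[keep]
```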
arXiv Detail & Related papers (2021-07-06T05:08:34Z)
- DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and LiDAR.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
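As a rough sketch of the classification-plus-inverse-projection idea, the hypothetical helper below labels LiDAR points as inside or outside the camera frustum for a candidate pose; an inverse camera projection step would then adjust the pose so these geometric labels agree with the labels predicted by the network. Names and parameters are illustrative, not the paper's API.

```python
import numpy as np

def frustum_labels(points_lidar, R, t, K, width, height):
    """Return a boolean mask: which LiDAR points project inside the image
    for the candidate pose (R, t) and pinhole intrinsics K."""
    p_cam = points_lidar @ R.T + t                  # LiDAR frame -> camera frame
    z = p_cam[:, 2]
    uv = (p_cam @ K.T)[:, :2] / np.maximum(z[:, None], 1e-8)
    return (
        (z > 0)
        & (uv[:, 0] >= 0) & (uv[:, 0] < width)
        & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    )
```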
arXiv Detail & Related papers (2021-04-08T04:27:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.