Density-invariant Features for Distant Point Cloud Registration
- URL: http://arxiv.org/abs/2307.09788v2
- Date: Tue, 8 Aug 2023 11:36:26 GMT
- Title: Density-invariant Features for Distant Point Cloud Registration
- Authors: Quan Liu, Hongzi Zhu, Yunsong Zhou, Hongyang Li, Shan Chang, Minyi Guo
- Abstract summary: A Group-wise Contrastive Learning (GCL) scheme is proposed to extract density-invariant geometric features.
We propose a simple yet effective training scheme that forces the features of multiple point clouds at the same spatial location to be similar.
The resulting fully-convolutional feature extractor is more powerful and density-invariant than state-of-the-art methods.
- Score: 29.68594463362292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Registration of distant outdoor LiDAR point clouds is crucial to extending
the 3D vision of collaborative autonomous vehicles, yet it is challenging due
to the small overlapping area and the huge disparity between observed point
densities. In this paper, we propose a Group-wise Contrastive Learning (GCL)
scheme to extract density-invariant geometric features for registering distant
outdoor LiDAR point clouds. We show through theoretical analysis and
experiments that contrastive positives should be independent and identically
distributed (i.i.d.) in order to train density-invariant feature extractors.
Building on this conclusion, we propose a simple yet effective training scheme
that forces the features of multiple point clouds at the same spatial location
(referred to as positive groups) to be similar, which naturally avoids the
sampling bias introduced by a pair of point clouds and conforms with the i.i.d.
principle. The resulting fully-convolutional feature extractor is more powerful
and density-invariant than state-of-the-art methods, improving the registration
recall of distant scenarios on the KITTI and nuScenes benchmarks by 40.9% and
26.9%, respectively. Code is available at https://github.com/liuQuan98/GCL.
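As a rough illustration of the group-wise idea, the sketch below pulls features from the same spatial location (a positive group) toward their group centroid and pushes different group centroids apart. This is only a minimal assumed loss form for intuition, not the authors' implementation; the function name, the centroid-based positive term, and the hinge-margin negative term are all assumptions.

```python
import numpy as np

def group_wise_contrastive_loss(features, groups, margin=1.0):
    """Hedged sketch of a group-wise contrastive objective.

    features: (N, D) array of per-point descriptors drawn from several
              point clouds; groups[i] labels the spatial location
              (positive group) that point i belongs to.
    Positives: features within a group are pulled toward the group
    centroid. Negatives: centroids of different groups are pushed
    apart up to `margin`.
    """
    labels = np.unique(groups)
    centroids = np.stack([features[groups == g].mean(axis=0) for g in labels])

    # Positive term: mean within-group spread around each centroid.
    pos = 0.0
    for c, g in zip(centroids, labels):
        pos += np.mean(np.linalg.norm(features[groups == g] - c, axis=1))
    pos /= len(labels)

    # Negative term: hinge loss on pairwise centroid distances.
    neg, pairs = 0.0, 0
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            d = np.linalg.norm(centroids[i] - centroids[j])
            neg += max(0.0, margin - d)
            pairs += 1
    neg /= max(pairs, 1)
    return pos + neg
```

Because every member of a positive group contributes symmetrically, the loss does not privilege any single pair of point clouds, which is the intuition behind avoiding pairwise sampling bias.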
Related papers
- BiEquiFormer: Bi-Equivariant Representations for Global Point Cloud Registration [28.75341781515012]
The goal of this paper is to address the problem of global point cloud registration (PCR) i.e., finding the optimal alignment between point clouds.
We show that state-of-the-art deep learning methods suffer from huge performance degradation when the point clouds are arbitrarily placed in space.
arXiv Detail & Related papers (2024-07-11T17:58:10Z) - Towards the Uncharted: Density-Descending Feature Perturbation for Semi-supervised Semantic Segmentation [51.66997548477913]
We propose a novel feature-level consistency learning framework named Density-Descending Feature Perturbation (DDFP).
Inspired by the low-density separation assumption in semi-supervised learning, our key insight is that feature density can shed light on the most promising direction for the segmentation classifier to explore.
The proposed DDFP outperforms other feature-level perturbation designs and shows state-of-the-art performance on both the Pascal VOC and Cityscapes datasets.
arXiv Detail & Related papers (2024-03-11T06:59:05Z) - Point Cloud Classification via Deep Set Linearized Optimal Transport [51.99765487172328]
We introduce Deep Set Linearized Optimal Transport, an algorithm designed for the efficient simultaneous embedding of point clouds into an $L^2$-space.
This embedding preserves specific low-dimensional structures within the Wasserstein space while constructing a classifier to distinguish between various classes of point clouds.
We showcase the advantages of our algorithm over the standard deep set approach through experiments on a flow dataset with a limited number of labeled point clouds.
arXiv Detail & Related papers (2024-01-02T23:26:33Z) - PCB-RandNet: Rethinking Random Sampling for LIDAR Semantic Segmentation in Autonomous Driving Scene [15.516687293651795]
We propose a new Polar Cylinder Balanced Random Sampling method for semantic segmentation of large-scale LiDAR point clouds.
In addition, a sampling consistency loss is introduced to further improve the segmentation performance and reduce the model's variance under different sampling methods.
Our approach produces excellent performance on both SemanticKITTI and SemanticPOSS benchmarks, achieving a 2.8% and 4.0% improvement, respectively.
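The sampling consistency loss mentioned above could plausibly take the form below: the same points are scored by the network under two sampling schemes (e.g. plain random vs. polar-cylinder-balanced), and the two predictive distributions are encouraged to agree. The symmetric-KL form and the function name are assumptions for illustration, not PCB-RandNet's actual loss.

```python
import numpy as np

def sampling_consistency_loss(logits_a, logits_b):
    """Hypothetical sketch of a sampling-consistency term: per-point
    class logits obtained under two different sampling strategies are
    pushed to agree via a symmetric KL divergence, averaged over points.
    """
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    p, q = softmax(logits_a), softmax(logits_b)
    # Symmetric KL: 0.5 * (KL(p || q) + KL(q || p)) per point.
    kl_pq = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    kl_qp = np.sum(q * (np.log(q) - np.log(p)), axis=-1)
    return float(np.mean(0.5 * (kl_pq + kl_qp)))
```

A term of this shape is zero exactly when the two sampling schemes yield identical predictions, which is one way a model's variance across sampling methods could be reduced.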
arXiv Detail & Related papers (2022-09-28T02:59:36Z) - Learning to Register Unbalanced Point Pairs [10.369750912567714]
Recent 3D registration methods can effectively handle large-scale or partially overlapping point pairs, yet unbalanced point pairs remain challenging.
We present a novel 3D registration method, called UPPNet, for unbalanced point pairs.
arXiv Detail & Related papers (2022-07-09T08:03:59Z) - Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z) - PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features [91.2054994193218]
We propose a point-set learning framework, PRIN, focusing on rotation-invariant feature extraction in point cloud analysis.
In addition, we extend PRIN to a sparse version called SPRIN, which directly operates on sparse point clouds.
Results show that, on the dataset with randomly rotated point clouds, SPRIN demonstrates better performance than state-of-the-art methods without any data augmentation.
arXiv Detail & Related papers (2021-02-24T06:44:09Z) - SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z) - Reconfigurable Voxels: A New Representation for LiDAR-Based Point Clouds [76.52448276587707]
We propose Reconfigurable Voxels, a new approach to constructing representations from 3D point clouds.
Specifically, we devise a biased random walk scheme, which adaptively covers each neighborhood with a fixed number of voxels.
We find that this approach effectively improves the stability of voxel features, especially for sparse regions.
arXiv Detail & Related papers (2020-04-06T15:07:16Z)
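The biased random walk described for Reconfigurable Voxels could be sketched roughly as follows: starting from a seed voxel, the walk repeatedly steps to an unvisited neighbor drawn with probability proportional to a bias score, until a fixed budget of voxels has been collected. The neighbor function, bias function, and revisit policy here are all illustrative assumptions, not the paper's actual scheme.

```python
import random

def biased_random_walk(start, neighbors, budget, bias, rng=None):
    """Hypothetical sketch: gather exactly `budget` distinct voxels
    around a seed. `neighbors(v)` returns adjacent voxel coordinates;
    `bias(v)` scores how desirable visiting v is (e.g. higher in
    sparse regions, so that sparse neighborhoods still get covered).
    """
    rng = rng or random.Random(0)
    visited = [start]
    seen = {start}
    frontier = [start]
    while len(visited) < budget and frontier:
        v = frontier.pop()
        cands = [n for n in neighbors(v) if n not in seen]
        if not cands:
            continue
        frontier.append(v)  # keep v around; its other neighbors stay reachable
        # Step to one unvisited neighbor, weighted by the bias score.
        nxt = rng.choices(cands, weights=[bias(n) for n in cands], k=1)[0]
        seen.add(nxt)
        visited.append(nxt)
        frontier.append(nxt)
    return visited
```

Collecting a fixed number of voxels per neighborhood, rather than a fixed radius, is what keeps the resulting voxel features stable in sparse regions.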
This list is automatically generated from the titles and abstracts of the papers in this site.