Lepard: Learning partial point cloud matching in rigid and deformable scenes
- URL: http://arxiv.org/abs/2111.12591v1
- Date: Wed, 24 Nov 2021 16:09:29 GMT
- Title: Lepard: Learning partial point cloud matching in rigid and deformable scenes
- Authors: Yang Li and Tatsuya Harada
- Abstract summary: Lepard is a learning-based approach to partial point cloud matching for rigid and deformable scenes.
For rigid point cloud matching, Lepard sets a new state-of-the-art on the 3DMatch / 3DLoMatch benchmarks with 93.6% / 69.0% registration recall.
In deformable cases, Lepard achieves +27.1% / +34.8% higher non-rigid feature matching recall than the prior art on our newly constructed 4DMatch / 4DLoMatch benchmark.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Lepard, a learning-based approach to partial point cloud matching
for rigid and deformable scenes. The key characteristic of Lepard is a set of
techniques that exploit 3D positional knowledge for point cloud matching:
1) an architecture that disentangles the point cloud representation into a
feature space and a 3D position space; 2) a position encoding method that
explicitly reveals 3D relative distance information through the dot product of
vectors; 3) a repositioning technique that modifies the cross-point-cloud
relative positions. Ablation studies demonstrate the effectiveness of these
techniques. For rigid point cloud matching, Lepard sets a new state of the art
on the 3DMatch / 3DLoMatch benchmarks with 93.6% / 69.0% registration recall.
In deformable cases, Lepard achieves +27.1% / +34.8% higher non-rigid feature
matching recall than the prior art on our newly constructed 4DMatch / 4DLoMatch
benchmarks.
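The second technique, a position encoding whose dot product exposes relative 3D distance, can be illustrated with a rotary-style encoding: each pair of feature channels is rotated by an angle proportional to one coordinate, so the dot product of two encoded features depends only on the difference of their positions. The sketch below is a minimal NumPy illustration under that assumption, not the paper's actual implementation; the function name `rotary_encode_3d` and the channel layout are our own.

```python
import numpy as np

def rotary_encode_3d(feat, pos, base=10000.0):
    """Rotate feature channel pairs by angles proportional to the 3D
    coordinates, devoting one third of the channels to each axis.
    After encoding, the dot product of two encoded features depends
    only on their relative 3D position, not on absolute coordinates."""
    d = feat.shape[-1]
    assert d % 6 == 0, "need channel pairs split evenly over x, y, z"
    d_axis = d // 3  # channels devoted to one coordinate axis
    # one frequency per channel pair within an axis block
    freqs = base ** (-np.arange(0, d_axis, 2) / d_axis)
    out = np.empty_like(feat)
    for a in range(3):  # x, y, z axes
        theta = pos[..., a:a + 1] * freqs          # rotation angles
        block = feat[..., a * d_axis:(a + 1) * d_axis]
        f0, f1 = block[..., 0::2], block[..., 1::2]
        # 2D rotation applied to each (f0, f1) channel pair
        out[..., a * d_axis:(a + 1) * d_axis:2] = f0 * np.cos(theta) - f1 * np.sin(theta)
        out[..., a * d_axis + 1:(a + 1) * d_axis:2] = f0 * np.sin(theta) + f1 * np.cos(theta)
    return out
```

Because each channel pair undergoes a pure rotation, translating both point positions by the same offset leaves every pairwise dot product unchanged, which is the "relative distance through dot product" property the abstract refers to.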
Related papers
- Zero-Shot Point Cloud Registration [94.39796531154303]
ZeroReg is the first zero-shot point cloud registration approach that eliminates the need for training on point cloud datasets.
The cornerstone of ZeroReg is the novel transfer of image features from keypoints to the point cloud, enriched by aggregating information from 3D geometric neighborhoods.
On benchmarks such as 3DMatch, 3DLoMatch, and ScanNet, ZeroReg achieves impressive Recall Ratios (RR) of over 84%, 46%, and 75%, respectively.
arXiv Detail & Related papers (2023-12-05T11:33:16Z)
- Clustering based Point Cloud Representation Learning for 3D Analysis [80.88995099442374]
We propose a clustering based supervised learning scheme for point cloud analysis.
Unlike the current de facto scene-wise training paradigm, our algorithm conducts within-class clustering in the point embedding space.
Our algorithm shows notable improvements on widely used point cloud segmentation datasets.
arXiv Detail & Related papers (2023-07-27T03:42:12Z)
- Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping [52.25114448281418]
Learning signed distance functions (SDFs) from 3D point clouds is an important task in 3D computer vision.
We propose to learn SDFs via a noise to noise mapping, which does not require any clean point cloud or ground truth supervision for training.
Our novelty lies in the noise to noise mapping which can infer a highly accurate SDF of a single object or scene from its multiple or even single noisy point cloud observations.
arXiv Detail & Related papers (2023-06-02T09:52:04Z)
- LFM-3D: Learnable Feature Matching Across Wide Baselines Using 3D Signals [9.201550006194994]
Learnable matchers often underperform when only small regions of co-visibility exist between image pairs.
We propose LFM-3D, a Learnable Feature Matching framework that uses models based on graph neural networks.
We show that the resulting improved correspondences lead to much higher relative posing accuracy for in-the-wild image pairs.
arXiv Detail & Related papers (2023-03-22T17:46:27Z)
- ImLoveNet: Misaligned Image-supported Registration Network for Low-overlap Point Cloud Pairs [14.377604289952188]
Low-overlap regions between paired point clouds yield features of very low confidence.
We propose a misaligned image supported registration network for low-overlap point cloud pairs, dubbed ImLoveNet.
arXiv Detail & Related papers (2022-07-02T13:17:34Z)
- A Representation Separation Perspective to Correspondences-free Unsupervised 3D Point Cloud Registration [40.12490804387776]
3D point cloud registration in the remote sensing field has been greatly advanced by deep learning based methods.
We propose a correspondences-free unsupervised point cloud registration (UPCR) method from the representation separation perspective.
Our method not only filters out disturbances in the pose-invariant representation but is also robust to partial-to-partial point clouds and noise.
arXiv Detail & Related papers (2022-03-24T17:50:19Z)
- Shape-invariant 3D Adversarial Point Clouds [111.72163188681807]
Adversariality and invisibility are two fundamental but conflicting characteristics of adversarial perturbations.
Previous adversarial attacks on 3D point cloud recognition have often been criticized for their noticeable point outliers.
We propose a novel Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility of point perturbations.
arXiv Detail & Related papers (2022-03-08T12:21:35Z)
- PREDATOR: Registration of 3D Point Clouds with Low Overlap [29.285040521765353]
PREDATOR is a model for pairwise point-cloud registration with deep attention to the overlap region.
It raises the rate of successful registrations by more than 20% in the low-overlap scenario.
It also sets a new state of the art for the 3DMatch benchmark with 89% registration recall.
arXiv Detail & Related papers (2020-11-25T20:25:03Z)
- D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features [51.04841465193678]
We leverage a 3D fully convolutional network for 3D point clouds.
We propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point.
Our method achieves state-of-the-art results in both indoor and outdoor scenarios.
arXiv Detail & Related papers (2020-03-06T12:51:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.