PCAM: Product of Cross-Attention Matrices for Rigid Registration of
Point Clouds
- URL: http://arxiv.org/abs/2110.01269v1
- Date: Mon, 4 Oct 2021 09:23:27 GMT
- Title: PCAM: Product of Cross-Attention Matrices for Rigid Registration of
Point Clouds
- Authors: Anh-Quan Cao and Gilles Puy and Alexandre Boulch and Renaud Marlet
- Abstract summary: PCAM is a neural network whose key element is a pointwise product of cross-attention matrices.
We show that PCAM achieves state-of-the-art results among methods that, like ours, solve steps (a) and (b) jointly via deep networks.
- Score: 79.99653758293277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rigid registration of point clouds with partial overlaps is a longstanding
problem usually solved in two steps: (a) finding correspondences between the
point clouds; (b) filtering these correspondences to keep only the most
reliable ones to estimate the transformation. Recently, several deep nets have
been proposed to solve these steps jointly. We build upon these works and
propose PCAM: a neural network whose key element is a pointwise product of
cross-attention matrices that mixes both low-level geometric and
high-level contextual information to find point correspondences. These
cross-attention matrices also permit the exchange of contextual information
between the point clouds at each layer, allowing the network to construct better
matching features within the overlapping regions. The experiments show that
PCAM achieves state-of-the-art results among methods that, like ours, solve
steps (a) and (b) jointly via deep networks. Our code and trained models are
available at https://github.com/valeoai/PCAM.
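As a rough illustration of the core idea, the sketch below (ours, not code from the authors' repository) computes a cross-attention matrix at each feature level and combines them by pointwise (Hadamard) product. The scaled dot-product scores, the softmax normalization, and the final row renormalization are assumptions made to keep the example runnable, not details taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def pcam_matching(feats_p, feats_q):
    """Combine per-layer cross-attention matrices by pointwise product.

    feats_p: list of (N, d_l) per-layer features for point cloud P
    feats_q: list of (M, d_l) per-layer features for point cloud Q
    Returns an (N, M) soft matching matrix mixing low-level (early
    layers) and high-level (late layers) similarities.
    """
    match = None
    for fp, fq in zip(feats_p, feats_q):
        scores = fp @ fq.T / np.sqrt(fp.shape[1])        # scaled dot product
        attn = softmax(scores, axis=1)                   # cross-attention P -> Q
        match = attn if match is None else match * attn  # pointwise product
    # Renormalize so each point in P keeps a distribution over Q.
    return match / np.clip(match.sum(axis=1, keepdims=True), 1e-12, None)

# Toy usage: three feature levels for clouds with 128 and 96 points.
rng = np.random.default_rng(0)
feats_p = [rng.standard_normal((128, d)) for d in (32, 64, 128)]
feats_q = [rng.standard_normal((96, d)) for d in (32, 64, 128)]
M = pcam_matching(feats_p, feats_q)
assert M.shape == (128, 96)
```

Because the product is taken elementwise, a pair of points only keeps a high matching score if it is supported at every level, which is one way to read the claim that low-level geometry and high-level context are mixed.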
Related papers
- Multiway Point Cloud Mosaicking with Diffusion and Global Optimization [74.3802812773891]
We introduce a novel framework for multiway point cloud mosaicking (named Wednesday).
At the core of our approach is ODIN, a learned pairwise registration algorithm that identifies overlaps and refines attention scores.
Tested on four diverse, large-scale datasets, our method achieves state-of-the-art pairwise and rotation registration results by a large margin on all benchmarks.
arXiv Detail & Related papers (2024-03-30T17:29:13Z) - SGNet: Salient Geometric Network for Point Cloud Registration [35.49985932039906]
Point Cloud Registration (PCR) is a critical and challenging task in computer vision.
Previous methods have encountered challenges with ambiguous matching due to similarity among patch blocks.
We propose a new framework that includes several novel techniques.
arXiv Detail & Related papers (2023-09-12T13:21:12Z) - Quantity-Aware Coarse-to-Fine Correspondence for Image-to-Point Cloud
Registration [4.954184310509112]
Image-to-point cloud registration aims to determine the relative camera pose between an RGB image and a reference point cloud.
Matching individual points with pixels can be inherently ambiguous due to modality gaps.
We propose a framework to capture quantity-aware correspondences between local point sets and pixel patches.
arXiv Detail & Related papers (2023-07-14T03:55:54Z) - HybridFusion: LiDAR and Vision Cross-Source Point Cloud Fusion [15.94976936555104]
We propose a cross-source point cloud fusion algorithm called HybridFusion.
It can register cross-source dense point clouds from different viewing angles in large outdoor scenes.
The proposed approach is evaluated comprehensively through qualitative and quantitative experiments.
arXiv Detail & Related papers (2023-04-10T10:54:54Z) - REGTR: End-to-end Point Cloud Correspondences with Transformers [79.52112840465558]
We conjecture that attention mechanisms can replace the role of explicit feature matching and RANSAC.
We propose an end-to-end framework to directly predict the final set of correspondences.
Our approach achieves state-of-the-art performance on 3DMatch and ModelNet benchmarks.
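Once a network such as REGTR predicts the final correspondences, the rigid transform follows in closed form, which is what makes RANSAC unnecessary. Below is a generic weighted Kabsch solver for that last step (a standard technique sketched by us under the assumption of per-correspondence confidence weights w; it is not REGTR's actual code):

```python
import numpy as np

def rigid_from_correspondences(src, tgt, w):
    """Weighted closed-form rigid fit (Kabsch): R @ src + t ~= tgt."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)
    mu_t = (w[:, None] * tgt).sum(0)
    H = (src - mu_s).T @ (w[:, None] * (tgt - mu_t))  # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Toy check: recover a known rotation about z and a translation.
rng = np.random.default_rng(1)
src = rng.standard_normal((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
tgt = src @ R_true.T + t_true
R_est, t_est = rigid_from_correspondences(src, tgt, np.ones(50))
assert np.allclose(R_est, R_true, atol=1e-6)
assert np.allclose(t_est, t_true, atol=1e-6)
```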
arXiv Detail & Related papers (2022-03-28T06:01:00Z) - DFC: Deep Feature Consistency for Robust Point Cloud Registration [0.4724825031148411]
We present a novel learning-based alignment network for complex alignment scenes.
We validate our approach on the 3DMatch dataset and the KITTI odometry dataset.
arXiv Detail & Related papers (2021-11-15T08:27:21Z) - DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and Lidar.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
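The classification step can be pictured as deciding, for each 3D point, whether it falls inside the camera frustum under a candidate pose. Here is a minimal sketch of that geometric test (our illustration rather than the DeepI2P code; the intrinsics K, pose (R, t), and image size are assumed inputs):

```python
import numpy as np

def frustum_labels(points, K, R, t, img_w, img_h):
    """Label points (N, 3) as inside (True) or outside the camera frustum."""
    pc = (R @ points.T + t[:, None]).T                # world -> camera frame
    in_front = pc[:, 2] > 0                           # positive depth only
    uv = (K @ pc.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-9, None)  # perspective divide
    in_img = ((uv[:, 0] >= 0) & (uv[:, 0] < img_w) &
              (uv[:, 1] >= 0) & (uv[:, 1] < img_h))
    return in_front & in_img
```

The inverse camera projection optimization can then be read as searching for the pose (R, t) whose frustum labels best agree with the network's per-point predictions.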
arXiv Detail & Related papers (2021-04-08T04:27:32Z) - DeepCLR: Correspondence-Less Architecture for Deep End-to-End Point
Cloud Registration [12.471564670462344]
This work addresses the problem of point cloud registration using deep neural networks.
We propose an approach to predict the alignment between two point clouds with overlapping data content, but displaced origins.
Our approach achieves state-of-the-art accuracy and the lowest run-time of the compared methods.
arXiv Detail & Related papers (2020-07-22T08:20:57Z) - RPM-Net: Robust Point Matching using Learned Features [79.52112840465558]
RPM-Net is a deep learning-based approach for rigid point cloud registration that is less sensitive to initialization and more robust than prior methods.
Unlike some existing methods, our RPM-Net handles missing correspondences and point clouds with partial visibility.
arXiv Detail & Related papers (2020-03-30T13:45:27Z) - Saliency Enhancement using Gradient Domain Edges Merging [65.90255950853674]
We develop a method to merge the edges with the saliency maps to improve the performance of the saliency.
This leads to our proposed saliency enhancement using edges (SEE), with an average improvement of at least 3.4 times on the DUT-OMRON dataset.
The SEE algorithm is split into two parts: SEE-Pre for preprocessing and SEE-Post for postprocessing.
arXiv Detail & Related papers (2020-02-11T14:04:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.