RGBD-Glue: General Feature Combination for Robust RGB-D Point Cloud Registration
- URL: http://arxiv.org/abs/2405.07594v2
- Date: Wed, 21 Aug 2024 05:32:47 GMT
- Title: RGBD-Glue: General Feature Combination for Robust RGB-D Point Cloud Registration
- Authors: Congjia Chen, Xiaoyu Jia, Yanhong Zheng, Yufu Qu
- Abstract summary: We propose a new feature combination framework, which applies a looser but more effective combination.
An explicit filter based on transformation consistency is designed for the combination framework, which can overcome each feature's weakness.
Experiments on ScanNet and 3DMatch show that our method achieves state-of-the-art performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud registration is a fundamental task for estimating rigid transformations between point clouds. Previous studies have used geometric information for extracting features, matching, and estimating transformations. Recently, owing to the advancement of RGB-D sensors, researchers have attempted to combine visual and geometric information to improve registration performance. However, these studies focused on extracting distinctive features by deep feature fusion, which cannot effectively mitigate the negative effects of each feature's weaknesses and cannot sufficiently leverage the valid information. In this paper, we propose a new feature combination framework that applies a looser but more effective combination. An explicit filter based on transformation consistency is designed for the combination framework, which can overcome each feature's weakness. In addition, an adaptive threshold determined by the error distribution is proposed to extract more valid information from the two types of features. Owing to this distinctive design, our proposed framework can estimate more accurate correspondences and is applicable to both hand-crafted and learning-based feature descriptors. Experiments on ScanNet and 3DMatch show that our method achieves state-of-the-art performance.
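To make the described idea more concrete, the following is a minimal Python/NumPy sketch of the general approach the abstract outlines: correspondences from visual and geometric descriptors are merged loosely, a rigid transformation is estimated, and matches are filtered by a transformation-consistency check whose threshold adapts to the empirical residual-error distribution. The function names, the quantile-based threshold, and the Kabsch-style estimator are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch only: a transformation-consistency filter with an adaptive threshold,
# loosely following the idea described in the abstract. Not the paper's code.
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst (Kabsch-style)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def consistency_filter(src_pts, dst_pts, keep_ratio=0.7):
    """Keep correspondences whose residual under the estimated transform is below
    a threshold adapted to the residual distribution (here: a simple quantile)."""
    R, t = estimate_rigid_transform(src_pts, dst_pts)
    residuals = np.linalg.norm((src_pts @ R.T + t) - dst_pts, axis=1)
    tau = np.quantile(residuals, keep_ratio)  # adaptive threshold (assumption)
    return residuals <= tau

# Synthetic demo: merge "visual" and "geometric" matches, corrupt a few, filter.
rng = np.random.default_rng(0)
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
src = rng.random((100, 3))                      # merged correspondences (source side)
dst = src @ R_true.T + t_true                   # target side under a known rigid motion
dst[:10] += rng.normal(scale=0.5, size=(10, 3)) # simulate outlier matches
mask = consistency_filter(src, dst, keep_ratio=0.8)
print(f"kept {mask.sum()} / {len(mask)} correspondences")
```

In this sketch the threshold is a fixed quantile of the residuals; the paper's adaptive threshold is instead derived from the error distribution of the two feature types, so the snippet should be read only as an outline of the filtering step.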
Related papers
- A Lightweight 3D Anomaly Detection Method with Rotationally Invariant Features [60.76577388438418]
3D anomaly detection (AD) is a crucial task in computer vision, aiming to identify anomalous points or regions from point cloud data. Existing methods may encounter challenges when handling point clouds with changes in orientation and position because the resulting features may vary significantly. We propose a novel Rotationally Invariant Features (RIF) framework for 3D AD, which maps each point into a rotationally invariant space to maintain consistency of representation.
arXiv Detail & Related papers (2025-11-17T08:16:05Z) - Registration is a Powerful Rotation-Invariance Learner for 3D Anomaly Detection [64.0168648353038]
3D anomaly detection in point-cloud data is critical for industrial quality control, aiming to identify structural defects with high reliability. Current memory bank-based methods often suffer from inconsistent feature transformations and limited discriminative capacity. We propose a registration-induced, rotation-invariant feature extraction framework that integrates the objectives of point-cloud registration and memory-based anomaly detection.
arXiv Detail & Related papers (2025-10-19T14:56:38Z) - Source-Free Object Detection with Detection Transformer [59.33653163035064]
Source-Free Object Detection (SFOD) enables knowledge transfer from a source domain to an unsupervised target domain for object detection without access to source data. Most existing SFOD approaches are either confined to conventional object detection (OD) models like Faster R-CNN or designed as general solutions without tailored adaptations for novel OD architectures, especially Detection Transformer (DETR). In this paper, we introduce Feature Reweighting ANd Contrastive Learning NetworK (FRANCK), a novel SFOD framework specifically designed to perform query-centric feature enhancement for DETRs.
arXiv Detail & Related papers (2025-10-13T07:35:04Z) - Focus Through Motion: RGB-Event Collaborative Token Sparsification for Efficient Object Detection [56.88160531995454]
Existing RGB-Event detection methods process the low-information regions of both modalities uniformly during feature extraction and fusion. We propose FocusMamba, which performs adaptive collaborative sparsification of multimodal features. Experiments on the DSEC-Det and PKU-DAVIS-SOD datasets demonstrate that the proposed method achieves superior performance in both accuracy and efficiency.
arXiv Detail & Related papers (2025-09-04T04:18:46Z) - Cross-modal feature fusion for robust point cloud registration with ambiguous geometry [6.742883954812066]
We propose a novel Cross-modal Feature Fusion method for point cloud registration. It incorporates a two-stage fusion of 3D point cloud features and 2D image features. It achieves state-of-the-art registration performance across all benchmarks.
arXiv Detail & Related papers (2025-05-19T13:22:46Z) - A Consistency-Aware Spot-Guided Transformer for Versatile and Hierarchical Point Cloud Registration [9.609585217048664]
We develop a consistency-aware spot-guided Transformer (CAST).
CAST incorporates a spot-guided cross-attention module to avoid interfering with irrelevant areas.
A lightweight fine matching module for both sparse keypoints and dense features can estimate the transformation accurately.
arXiv Detail & Related papers (2024-10-14T08:48:25Z) - V2X-AHD: Vehicle-to-Everything Cooperation Perception via Asymmetric
Heterogenous Distillation Network [13.248981195106069]
We propose a multi-view vehicle-road cooperation perception system, vehicle-to-everything cooperative perception (V2X-AHD).
According to this study, V2X-AHD effectively improves the accuracy of 3D object detection while reducing the number of network parameters.
arXiv Detail & Related papers (2023-10-10T13:12:03Z) - Enhancing Deformable Local Features by Jointly Learning to Detect and
Describe Keypoints [8.390939268280235]
Local feature extraction is a standard approach in computer vision for tackling important tasks such as image matching and retrieval.
We propose DALF, a novel deformation-aware network for jointly detecting and describing keypoints.
Our approach also enhances the performance of two real-world applications: deformable object retrieval and non-rigid 3D surface registration.
arXiv Detail & Related papers (2023-04-02T18:01:51Z) - Improving RGB-D Point Cloud Registration by Learning Multi-scale Local
Linear Transformation [38.64501645574878]
Point cloud registration aims at estimating the geometric transformation between two point cloud scans.
Recent point cloud registration methods have tried to apply RGB-D data to achieve more accurate correspondence.
We propose a new Geometry-Aware Visual Feature Extractor (GAVE) that employs multi-scale local linear transformation.
arXiv Detail & Related papers (2022-08-31T14:36:09Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - RGB-D Saliency Detection via Cascaded Mutual Information Minimization [122.8879596830581]
Existing RGB-D saliency detection models do not explicitly encourage RGB and depth to achieve effective multi-modal learning.
We introduce a novel multi-stage cascaded learning framework via mutual information minimization to "explicitly" model the multi-modal information between RGB image and depth data.
arXiv Detail & Related papers (2021-09-15T12:31:27Z) - UPDesc: Unsupervised Point Descriptor Learning for Robust Registration [54.95201961399334]
UPDesc is an unsupervised method to learn point descriptors for robust point cloud registration.
We show that our learned descriptors yield superior performance over existing unsupervised methods.
arXiv Detail & Related papers (2021-08-05T17:11:08Z) - Progressively Guided Alternate Refinement Network for RGB-D Salient
Object Detection [63.18846475183332]
We aim to develop an efficient and compact deep network for RGB-D salient object detection.
We propose a progressively guided alternate refinement network to refine it.
Our model outperforms existing state-of-the-art approaches by a large margin.
arXiv Detail & Related papers (2020-08-17T02:55:06Z) - RGB-D Salient Object Detection with Cross-Modality Modulation and
Selection [126.4462739820643]
We present an effective method to progressively integrate and refine the cross-modality complementarities for RGB-D salient object detection (SOD).
The proposed network mainly solves two challenging issues: 1) how to effectively integrate the complementary information from RGB image and its corresponding depth map, and 2) how to adaptively select more saliency-related features.
arXiv Detail & Related papers (2020-07-14T14:22:50Z) - Hierarchical Dynamic Filtering Network for RGB-D Salient Object
Detection [91.43066633305662]
The central issue in RGB-D salient object detection (SOD) is how to better integrate and utilize cross-modal fusion information.
In this paper, we explore these issues from a new perspective.
We implement a kind of more flexible and efficient multi-scale cross-modal feature processing.
arXiv Detail & Related papers (2020-07-13T07:59:55Z)