RE-TRIP : Reflectivity Instance Augmented Triangle Descriptor for 3D Place Recognition
- URL: http://arxiv.org/abs/2505.16165v1
- Date: Thu, 22 May 2025 03:11:30 GMT
- Title: RE-TRIP : Reflectivity Instance Augmented Triangle Descriptor for 3D Place Recognition
- Authors: Yechan Park, Gyuhyeon Pak, Euntai Kim
- Abstract summary: We propose a novel descriptor for 3D Place Recognition, named RE-TRIP. This new descriptor leverages both geometric measurements and reflectivity to enhance robustness. We conduct a series of experiments to demonstrate the effectiveness of RE-TRIP.
- Score: 14.095215136905553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While most people associate LiDAR primarily with its ability to measure distances and provide geometric information about the environment (via point clouds), LiDAR also captures additional data, including reflectivity or intensity values. Unfortunately, when LiDAR is applied to Place Recognition (PR) in mobile robotics, most previous works on LiDAR-based PR rely only on geometric measurements, neglecting the additional reflectivity information that LiDAR provides. In this paper, we propose a novel descriptor for 3D PR, named RE-TRIP (REflectivity-instance augmented TRIangle descriPtor). This new descriptor leverages both geometric measurements and reflectivity to enhance robustness in challenging scenarios such as geometric degeneracy, high geometric similarity, and the presence of dynamic objects. To implement RE-TRIP in real-world applications, we further propose (1) a keypoint extraction method, (2) a key instance segmentation method, (3) a RE-TRIP matching method, and (4) a reflectivity-combined loop verification method. Finally, we conduct a series of experiments to demonstrate the effectiveness of RE-TRIP. Applied to public datasets (i.e., HELIPR, FusionPortable) containing diverse scenarios such as long corridors, bridges, large-scale urban areas, and highly dynamic environments, our experimental results show that the proposed method outperforms existing state-of-the-art methods, namely Scan Context, Intensity Scan Context, and STD.
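The abstract does not specify how the triangle descriptor is constructed, but the core idea of a reflectivity-augmented triangle descriptor can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes each vertex is an instance centroid paired with a mean reflectivity, and that sorting side lengths and reflectivities gives invariance to vertex ordering and rigid motion.

```python
import numpy as np

def triangle_descriptor(p1, p2, p3, r1, r2, r3):
    """Illustrative reflectivity-augmented triangle descriptor.

    Each keypoint is a 3D position paired with the mean reflectivity of
    its instance. Sorting the side lengths and the reflectivity values
    makes the descriptor invariant to vertex ordering and to rigid
    transformations of the scan.
    """
    sides = np.sort([
        np.linalg.norm(p1 - p2),
        np.linalg.norm(p2 - p3),
        np.linalg.norm(p3 - p1),
    ])
    refl = np.sort([r1, r2, r3])
    return np.concatenate([sides, refl])

d = triangle_descriptor(np.array([0.0, 0.0, 0.0]),
                        np.array([3.0, 0.0, 0.0]),
                        np.array([0.0, 4.0, 0.0]),
                        0.8, 0.2, 0.5)
print(d)  # -> [3.  4.  5.  0.2 0.5 0.8]
```

Descriptors of this form can be hashed or compared with a simple distance, so candidate loop pairs can be retrieved without any pose prior; the reflectivity entries let two geometrically similar triangles (e.g., in a long corridor) be told apart.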
Related papers
- Pseudo Depth Meets Gaussian: A Feed-forward RGB SLAM Baseline [64.42938561167402]
We propose an online 3D reconstruction method using 3D Gaussian-based SLAM, combined with a feed-forward recurrent prediction module. This approach replaces slow test-time optimization with fast network inference, significantly improving tracking speed. Our method achieves performance on par with the state-of-the-art SplaTAM, while reducing tracking time by more than 90%.
arXiv Detail & Related papers (2025-08-06T16:16:58Z) - Cross-Modal Geometric Hierarchy Fusion: An Implicit-Submap Driven Framework for Resilient 3D Place Recognition [4.196626042312499]
We propose a novel framework that redefines 3D place recognition through density-agnostic geometric reasoning. Specifically, we introduce an implicit 3D representation based on elastic points, which is immune to the interference of original scene point cloud density. With the aid of these two types of information, we obtain descriptors that fuse geometric information from both bird's-eye view and 3D segment perspectives.
arXiv Detail & Related papers (2025-06-17T07:04:07Z) - Joint Depth and Reflectivity Estimation using Single-Photon LiDAR [9.842115005951651]
Single-Photon Light Detection and Ranging (SP-LiDAR) is emerging as a leading technology for high-precision 3D vision tasks. Its timestamps encode two complementary pieces of information: pulse travel time (depth) and the number of photons reflected by the object (reflectivity).
arXiv Detail & Related papers (2025-05-19T15:33:28Z) - LaRI: Layered Ray Intersections for Single-view 3D Geometric Reasoning [75.9814389360821]
Layered Ray Intersections (LaRI) is a new method for unseen geometry reasoning from a single image. Benefiting from the compact and layered representation, LaRI enables complete, efficient, and view-aligned geometric reasoning. We build a complete training data generation pipeline for synthetic and real-world data, including 3D objects and scenes.
arXiv Detail & Related papers (2025-04-25T15:31:29Z) - LiDAR-GS:Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
We present LiDAR-GS, a real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. The method achieves state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z) - GauU-Scene V2: Assessing the Reliability of Image-Based Metrics with Expansive Lidar Image Dataset Using 3DGS and NeRF [2.4673377627220323]
We introduce a novel, multimodal large-scale scene reconstruction benchmark that utilizes newly developed 3D representation approaches.
GauU-Scene encompasses over 6.5 square kilometers and features a comprehensive RGB dataset coupled with LiDAR ground truth.
We are the first to propose a LiDAR and image alignment method for a drone-based dataset.
arXiv Detail & Related papers (2024-04-07T08:51:31Z) - Reflectivity Is All You Need!: Advancing LiDAR Semantic Segmentation [11.684330305297523]
This paper explores the advantages of employing calibrated intensity (also referred to as reflectivity) within learning-based LiDAR semantic segmentation frameworks.
We show that replacing raw intensity with calibrated reflectivity yields a 4% improvement in mean Intersection over Union for off-road scenarios.
We demonstrate the potential benefits of using calibrated intensity for semantic segmentation in urban environments.
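The distinction between raw intensity and reflectivity in this line of work is that reflectivity is intensity calibrated for sensor effects, most prominently range falloff. The paper's calibration is not given here; the snippet below is a simplified inverse-square range normalization only, ignoring incidence-angle and per-beam corrections that real calibrations also apply (the reference range of 10 m is an arbitrary choice for illustration).

```python
import numpy as np

def calibrate_reflectivity(intensity, ranges, ref_range=10.0):
    """Range-normalize raw LiDAR intensity (simplified illustrative model).

    Return intensity falls off roughly with the square of the range, so
    scaling each return by (range / ref_range)^2 removes most of the
    distance dependence, leaving a value closer to surface reflectivity.
    """
    return intensity * (ranges / ref_range) ** 2

# The same surface observed at 10 m and 20 m: raw intensities differ
# by 4x, but the calibrated values agree.
raw = np.array([100.0, 25.0])
rng = np.array([10.0, 20.0])
print(calibrate_reflectivity(raw, rng))  # -> [100. 100.]
```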
arXiv Detail & Related papers (2024-03-19T22:57:03Z) - RGB-based Category-level Object Pose Estimation via Decoupled Metric Scale Recovery [72.13154206106259]
We propose a novel pipeline that decouples the 6D pose and size estimation to mitigate the influence of imperfect scales on rigid transformations.
Specifically, we leverage a pre-trained monocular estimator to extract local geometric information.
A separate branch is designed to directly recover the metric scale of the object based on category-level statistics.
arXiv Detail & Related papers (2023-09-19T02:20:26Z) - Few-shot Non-line-of-sight Imaging with Signal-surface Collaborative Regularization [18.466941045530408]
Non-line-of-sight imaging aims to reconstruct targets from multiply reflected light.
We propose a signal-surface collaborative regularization framework that provides noise-robust reconstructions with a minimal number of measurements.
Our approach has great potential in real-time non-line-of-sight imaging applications such as rescue operations and autonomous driving.
arXiv Detail & Related papers (2022-11-21T11:19:20Z) - Ret3D: Rethinking Object Relations for Efficient 3D Object Detection in Driving Scenes [82.4186966781934]
We introduce a simple, efficient, and effective two-stage detector, termed as Ret3D.
At the core of Ret3D is the utilization of novel intra-frame and inter-frame relation modules.
With negligible extra overhead, Ret3D achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-08-18T03:48:58Z) - Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z) - DONet: Learning Category-Level 6D Object Pose and Size Estimation from Depth Observation [53.55300278592281]
We propose a method of Category-level 6D Object Pose and Size Estimation (COPSE) from a single depth image.
Our framework makes inferences based on the rich geometric information of the object in the depth channel alone.
Our framework competes with state-of-the-art approaches that require labeled real-world images.
arXiv Detail & Related papers (2021-06-27T10:41:50Z) - Intensity Scan Context: Coding Intensity and Geometry Relations for Loop Closure Detection [26.209412893744094]
Loop closure detection is an essential and challenging problem in simultaneous localization and mapping (SLAM).
Existing works on 3D loop closure detection often leverage the matching of local or global geometrical-only descriptors.
We propose a novel global descriptor, intensity scan context (ISC), that explores both geometry and intensity characteristics.
arXiv Detail & Related papers (2020-03-12T08:11:09Z)
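The Intensity Scan Context idea of jointly encoding geometry and intensity can be sketched roughly as below. This is an illustrative simplification, not the ISC paper's exact formulation: points are binned into a polar bird's-eye-view grid of rings (by range) and sectors (by azimuth), and each cell keeps the maximum intensity among its points, producing a 2D matrix descriptor that can be compared across scans. The grid sizes and maximum range are arbitrary example values.

```python
import numpy as np

def intensity_scan_context(points, intensities,
                           n_rings=20, n_sectors=60, max_range=80.0):
    """Illustrative polar intensity descriptor (simplified ISC-style).

    Bins a scan into rings (by range) and sectors (by azimuth); each
    cell stores the maximum intensity of the points falling in it.
    """
    x, y = points[:, 0], points[:, 1]
    rng = np.hypot(x, y)
    ang = np.mod(np.arctan2(y, x), 2 * np.pi)
    ring = np.minimum((rng / max_range * n_rings).astype(int), n_rings - 1)
    sector = np.minimum((ang / (2 * np.pi) * n_sectors).astype(int),
                        n_sectors - 1)
    desc = np.zeros((n_rings, n_sectors))
    # Unbuffered max-reduction: repeated hits on a cell keep the max.
    np.maximum.at(desc, (ring, sector), intensities)
    return desc

pts = np.array([[1.0, 0.0, 0.0], [79.0, 0.0, 0.0]])
desc = intensity_scan_context(pts, np.array([0.3, 0.9]))
print(desc.shape)  # -> (20, 60)
```

Because the sector axis corresponds to azimuth, a column-wise circular shift of the matrix approximates a yaw rotation of the scan, which is what makes this family of descriptors rotation-tolerant at matching time.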
This list is automatically generated from the titles and abstracts of the papers in this site.