Phase-SLAM: Phase Based Simultaneous Localization and Mapping for Mobile
Structured Light Illumination Systems
- URL: http://arxiv.org/abs/2201.09048v1
- Date: Sat, 22 Jan 2022 13:47:06 GMT
- Title: Phase-SLAM: Phase Based Simultaneous Localization and Mapping for Mobile
Structured Light Illumination Systems
- Authors: Xi Zheng, Rui Ma, Rui Gao, and Qi Hao
- Abstract summary: Phase-SLAM is a framework for fast and accurate SLI sensor pose estimation and 3D object reconstruction.
We build datasets from both a simulation platform and a real-world robotic-arm-based SLI system to verify the proposed approach.
Experiment results demonstrate that the proposed Phase-SLAM outperforms other state-of-the-art methods in terms of pose estimation and 3D reconstruction.
- Score: 14.9174946109114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structured Light Illumination (SLI) systems have been used for reliable
indoor dense 3D scanning via phase triangulation. However, mobile SLI systems
for 360 degree 3D reconstruction demand 3D point cloud registration, involving
high computational complexity. In this paper, we propose a phase based
Simultaneous Localization and Mapping (Phase-SLAM) framework for fast and
accurate SLI sensor pose estimation and 3D object reconstruction. The novelty
of this work is threefold: (1) developing a reprojection model from 3D points
to 2D phase data towards phase registration with low computational complexity;
(2) developing a local optimizer to achieve SLI sensor pose estimation
(odometry) using the derived Jacobian matrix for the 6 DoF variables; (3)
developing a compressive phase comparison method to achieve high-efficiency
loop closure detection. The whole Phase-SLAM pipeline is then completed using
existing global pose graph optimization techniques. We build datasets from both
the Unreal simulation platform and a real-world robotic-arm-based SLI system
to verify the proposed approach. The experiment results demonstrate that the
proposed Phase-SLAM outperforms other state-of-the-art methods in terms of the
efficiency and accuracy of pose estimation and 3D reconstruction. The
open-source code is available at https://github.com/ZHENGXi-git/Phase-SLAM.
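The local optimizer described in contribution (2) — pose estimation over the 6 DoF variables using a derived Jacobian — can be sketched generically as a Gauss-Newton iteration on a reprojection residual. This is a minimal illustrative sketch, not the paper's implementation: the observation model below is a plain pinhole stand-in for the actual 3D-point-to-2D-phase reprojection model, the Jacobian is computed numerically rather than analytically, and all function names are hypothetical (see the linked repository for the real code):

```python
import numpy as np

def skew(w):
    # Skew-symmetric matrix of a 3-vector (cross-product operator).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    # Rodrigues' formula: rotation vector -> rotation matrix.
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(points, R, t):
    # Stand-in observation model: pinhole projection of transformed 3D points.
    # Phase-SLAM instead reprojects 3D points to 2D phase data.
    p = points @ R.T + t
    return p[:, :2] / p[:, 2:3]

def residual(xi, points, obs):
    # xi = [rotation vector (3), translation (3)]: the 6 DoF variables.
    R, t = exp_so3(xi[:3]), xi[3:]
    return (project(points, R, t) - obs).ravel()

def gauss_newton(points, obs, iters=20):
    # Local pose optimization: iteratively linearize the residual and solve
    # the damped normal equations J^T J dx = -J^T r.
    xi = np.zeros(6)
    for _ in range(iters):
        r = residual(xi, points, obs)
        J = np.empty((r.size, 6))
        eps = 1e-6
        for k in range(6):  # numerical Jacobian w.r.t. each DoF
            d = np.zeros(6)
            d[k] = eps
            J[:, k] = (residual(xi + d, points, obs) - r) / eps
        step = np.linalg.solve(J.T @ J + 1e-9 * np.eye(6), -J.T @ r)
        xi = xi + step
        if np.linalg.norm(step) < 1e-10:
            break
    return xi
```

With noise-free synthetic observations generated from a small ground-truth pose, the iteration recovers that pose; the paper's analytic Jacobian serves the same role but avoids the finite-difference passes.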
Related papers
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- Visual SLAM with 3D Gaussian Primitives and Depth Priors Enabling Novel View Synthesis [11.236094544193605]
Conventional geometry-based SLAM systems lack dense 3D reconstruction capabilities.
We propose a real-time RGB-D SLAM system that incorporates a novel view synthesis technique, 3D Gaussian Splatting.
arXiv Detail & Related papers (2024-08-10T21:23:08Z)
- IG-SLAM: Instant Gaussian SLAM [6.228980850646457]
3D Gaussian Splatting has recently shown promising results as an alternative scene representation in SLAM systems.
We present IG-SLAM, a dense RGB-only SLAM system that employs robust Dense-SLAM methods for tracking and combines them with Gaussian Splatting.
We demonstrate competitive performance with state-of-the-art RGB-only SLAM systems while achieving faster operation speeds.
arXiv Detail & Related papers (2024-08-02T09:07:31Z)
- Q-SLAM: Quadric Representations for Monocular SLAM [89.05457684629621]
Monocular SLAM has long grappled with the challenge of accurately modeling 3D geometries.
Recent advances in Neural Radiance Fields (NeRF)-based monocular SLAM have shown promise.
We propose a novel approach that reimagines volumetric representations through the lens of quadric forms.
arXiv Detail & Related papers (2024-03-12T23:27:30Z)
- GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting [51.96353586773191]
We introduce GS-SLAM, the first to utilize a 3D Gaussian representation in a Simultaneous Localization and Mapping system.
Our method utilizes a real-time differentiable splatting rendering pipeline that offers significant speedup to map optimization and RGB-D rendering.
Our method achieves competitive performance compared with existing state-of-the-art real-time methods on the Replica, TUM-RGBD datasets.
arXiv Detail & Related papers (2023-11-20T12:08:23Z)
- GOOD: General Optimization-based Fusion for 3D Object Detection via LiDAR-Camera Object Candidates [10.534984939225014]
3D object detection serves as the core basis of the perception tasks in autonomous driving.
GOOD is a general optimization-based fusion framework that can achieve satisfying detection without training additional models.
Experiments on both the nuScenes and KITTI datasets show that GOOD outperforms PointPillars by 9.1% in mAP score.
arXiv Detail & Related papers (2023-03-17T07:05:04Z)
- FD-SLAM: 3-D Reconstruction Using Features and Dense Matching [18.577229381683434]
We propose an RGB-D SLAM system that uses dense frame-to-model odometry to build accurate sub-maps.
We incorporate a learning-based loop closure component based on 3-D features which further stabilises map building.
The approach can also scale to large scenes where other systems often fail.
arXiv Detail & Related papers (2022-03-25T18:58:46Z)
- Efficient 3D Deep LiDAR Odometry [16.388259779644553]
An efficient 3D point cloud learning architecture, named PWCLO-Net, is proposed for LiDAR odometry.
The entire architecture is holistically optimized end-to-end to achieve adaptive learning of cost volume and mask.
arXiv Detail & Related papers (2021-11-03T11:09:49Z)
- Uncertainty-Aware Camera Pose Estimation from Points and Lines [101.03675842534415]
Perspective-n-Point-and-Line (PnPL) aims at fast, accurate, and robust camera localization with respect to a 3D model from 2D-3D feature coordinates.
arXiv Detail & Related papers (2021-07-08T15:19:36Z)
- SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion [86.77318031029404]
We propose a framework that performs scene reconstruction and semantic scene completion jointly in an incremental and real-time manner.
Our framework relies on a novel neural architecture designed to process occupancy maps and leverages voxel states to accurately and efficiently fuse semantic completion with the 3D global model.
arXiv Detail & Related papers (2020-10-26T15:31:52Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.