Indoor simultaneous localization and mapping based on fringe projection profilometry
- URL: http://arxiv.org/abs/2204.11020v1
- Date: Sat, 23 Apr 2022 08:35:58 GMT
- Title: Indoor simultaneous localization and mapping based on fringe projection profilometry
- Authors: Yang Zhao, Kai Zhang, Haotian Yu, Yi Zhang, Dongliang Zheng, Jing Han
- Abstract summary: We propose a novel indoor SLAM method built on the coordinate transformation relationship of fringe projection profilometry (FPP).
The proposed indoor SLAM achieves localization and mapping accuracy of around one millimeter.
- Score: 17.58921454201053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simultaneous Localization and Mapping (SLAM) plays an important role in
outdoor and indoor applications ranging from autonomous driving to indoor
robotics. Outdoor SLAM has been widely used with the assistance of LiDAR or
GPS. For indoor applications, however, LiDAR does not satisfy the accuracy
requirement and GPS signals are unavailable. An accurate and efficient scene
sensing technique is required for indoor SLAM. As the most promising 3D sensing
technique, the opportunities for indoor SLAM with fringe projection
profilometry (FPP) systems are obvious, but methods to date have not fully
leveraged the accuracy and speed of sensing that such systems offer. In this
paper, we propose a novel indoor SLAM method built on the coordinate
transformation relationship of FPP, where a 2D-to-3D descriptor-assisted scheme
is used for mapping and localization. The correspondences generated by matching
descriptors enable fast and accurate mapping, while the transform estimated
between the 2D and 3D descriptors localizes the sensor.
Experimental results demonstrate that the proposed indoor SLAM achieves
localization and mapping accuracy of around one millimeter.
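The coordinate transformation relationship in FPP starts from a per-pixel phase recovered from phase-shifted fringe images, which calibration then maps to 3D coordinates. As an illustrative sketch (not the authors' implementation, and with all names and values chosen here for the example), a standard N-step phase-shifting retrieval, the usual first stage of an FPP pipeline, looks like this:

```python
import math

def phase_from_steps(intensities):
    """Recover the wrapped phase at one pixel from N phase-shifted
    fringe intensities I_k = A + B*cos(phi + 2*pi*k/N).

    Returns phi in (-pi, pi]. The background A cancels because the
    shift terms sum to zero over a full period.
    """
    n = len(intensities)
    s = sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-s, c)

# Synthetic pixel: background A=0.5, modulation B=0.4, true phase 1.2 rad,
# sampled with three shifts (the minimal N for this formula).
phi_true = 1.2
imgs = [0.5 + 0.4 * math.cos(phi_true + 2 * math.pi * k / 3) for k in range(3)]
print(round(phase_from_steps(imgs), 6))  # → 1.2
```

In a full FPP-based SLAM pipeline this wrapped phase would still need unwrapping and a calibrated phase-to-coordinate mapping before the 3D points can feed descriptor matching.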
Related papers
- EM-GANSim: Real-time and Accurate EM Simulation Using Conditional GANs for 3D Indoor Scenes [55.2480439325792]
We present a novel machine-learning (ML) approach (EM-GANSim) for real-time electromagnetic (EM) propagation.
In practice, it can compute the signal strength in a few milliseconds on any location in 3D indoor environments.
arXiv Detail & Related papers (2024-05-27T17:19:02Z)
- MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior rendering by enabling faster scale awareness, and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z)
- PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency [30.5868776990673]
PIN-SLAM is a system for building globally consistent maps based on an elastic and compact point-based implicit neural map representation.
Our implicit map is based on sparse optimizable neural points, which are inherently elastic and deformable with the global pose adjustment when closing a loop.
PIN-SLAM achieves pose estimation accuracy better or on par with the state-of-the-art LiDAR odometry or SLAM systems.
arXiv Detail & Related papers (2024-01-17T10:06:12Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- UncLe-SLAM: Uncertainty Learning for Dense Neural SLAM [60.575435353047304]
We present an uncertainty learning framework for dense neural simultaneous localization and mapping (SLAM)
We propose an online framework for sensor uncertainty estimation that can be trained in a self-supervised manner from only 2D input data.
arXiv Detail & Related papers (2023-06-19T16:26:25Z)
- ESLAM: Efficient Dense SLAM System Based on Hybrid Representation of Signed Distance Fields [2.0625936401496237]
ESLAM reads RGB-D frames with unknown camera poses in a sequential manner and incrementally reconstructs the scene representation.
ESLAM improves the accuracy of 3D reconstruction and camera localization of state-of-the-art dense visual SLAM methods by more than 50%.
arXiv Detail & Related papers (2022-11-21T18:25:14Z)
- Semi-Perspective Decoupled Heatmaps for 3D Robot Pose Estimation from Depth Maps [66.24554680709417]
Knowing the exact 3D location of workers and robots in a collaborative environment enables several real applications.
We propose a non-invasive framework based on depth devices and deep neural networks to estimate the 3D pose of robots from an external camera.
arXiv Detail & Related papers (2022-07-06T08:52:12Z)
- FD-SLAM: 3-D Reconstruction Using Features and Dense Matching [18.577229381683434]
We propose an RGB-D SLAM system that uses dense frame-to-model odometry to build accurate sub-maps.
We incorporate a learning-based loop closure component based on 3-D features which further stabilises map building.
The approach can also scale to large scenes where other systems often fail.
arXiv Detail & Related papers (2022-03-25T18:58:46Z)
- Phase-SLAM: Phase Based Simultaneous Localization and Mapping for Mobile Structured Light Illumination Systems [14.9174946109114]
Phase-SLAM is a framework for fast and accurate SLI sensor pose estimation and 3D object reconstruction.
We build datasets from both a simulation platform and a robotic arm based SLI system in real-world to verify the proposed approach.
Experiment results demonstrate that the proposed Phase-SLAM outperforms other state-of-the-art methods in terms of pose estimation and 3D reconstruction.
arXiv Detail & Related papers (2022-01-22T13:47:06Z)
- Uncertainty-Aware Camera Pose Estimation from Points and Lines [101.03675842534415]
Perspective-n-Point-and-Line (PnPL) aims at fast, accurate and robust camera localization with respect to a 3D model from 2D-3D feature coordinates.
arXiv Detail & Related papers (2021-07-08T15:19:36Z)
- LatentSLAM: unsupervised multi-sensor representation learning for localization and mapping [7.857987850592964]
We propose an unsupervised representation learning method that yields low-dimensional latent state descriptors.
Our method is sensor agnostic and can be applied to any sensor modality.
We show how combining multiple sensors increases robustness by reducing the number of false matches.
arXiv Detail & Related papers (2021-05-07T13:44:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.