OmniSLAM: Omnidirectional Localization and Dense Mapping for
Wide-baseline Multi-camera Systems
- URL: http://arxiv.org/abs/2003.08056v1
- Date: Wed, 18 Mar 2020 05:52:10 GMT
- Title: OmniSLAM: Omnidirectional Localization and Dense Mapping for
Wide-baseline Multi-camera Systems
- Authors: Changhee Won, Hochang Seok, Zhaopeng Cui, Marc Pollefeys, Jongwoo Lim
- Abstract summary: We present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras.
For more practical and accurate reconstruction, we first introduce improved and lightweight deep neural networks for omnidirectional depth estimation.
We integrate our omnidirectional depth estimates into the visual odometry (VO) and add a loop closing module for global consistency.
- Score: 88.41004332322788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present an omnidirectional localization and dense mapping
system for a wide-baseline multiview stereo setup with ultra-wide field-of-view
(FOV) fisheye cameras, which provides 360-degree coverage of stereo observations
of the environment. For more practical and accurate reconstruction, we first
introduce improved and lightweight deep neural networks for
omnidirectional depth estimation, which are faster and more accurate than
existing networks. Second, we integrate our omnidirectional depth estimates
into the visual odometry (VO) and add a loop closing module for global
consistency. Using the estimated depth map, we reproject keypoints into the
other views, which leads to a better and more efficient feature matching
process. Finally, we fuse the omnidirectional depth maps and the estimated rig
poses into a truncated signed distance function (TSDF) volume to obtain a 3D
map. We evaluate our method on synthetic datasets with ground truth and on
real-world sequences of challenging environments, and extensive experiments
show that the proposed system generates excellent reconstruction results in
both synthetic and real-world environments.
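The depth-guided matching step above lends itself to a brief illustration. The sketch below is not the authors' implementation: it assumes an equidistant fisheye model and known rig extrinsics as stand-ins for the paper's calibrated omnidirectional setup, and shows how an estimated depth value turns cross-view feature matching into a local search around a predicted pixel rather than a global search.

```python
# Sketch of depth-guided keypoint reprojection for cross-view matching.
# Assumptions (not from the paper): an equidistant fisheye model as a
# stand-in for the actual calibrated camera model, and 4x4 rigid-body
# extrinsics T_j_i mapping camera-i coordinates into camera j.
import numpy as np

def unproject(uv, depth, K):
    """Lift a fisheye pixel to a 3D point in the camera frame.
    Equidistant model: r = f * theta, where theta is the angle from the
    optical axis and r the radial distance from the principal point."""
    fx, fy, cx, cy = K
    x, y = (uv[0] - cx) / fx, (uv[1] - cy) / fy
    r = np.hypot(x, y)
    theta = r                      # equidistant: angle equals normalized radius
    phi = np.arctan2(y, x)
    ray = np.array([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)])
    return depth * ray             # 3D point along the viewing ray

def project(p_cam, K):
    """Project a 3D point in the camera frame back to fisheye pixels."""
    fx, fy, cx, cy = K
    theta = np.arccos(np.clip(p_cam[2] / np.linalg.norm(p_cam), -1.0, 1.0))
    phi = np.arctan2(p_cam[1], p_cam[0])
    return np.array([cx + fx * theta * np.cos(phi),
                     cy + fy * theta * np.sin(phi)])

def predict_match_location(uv_i, depth_i, K_i, K_j, T_j_i):
    """Reproject a keypoint from camera i into camera j using its depth,
    giving a small search window for matching instead of a global search."""
    p_i = unproject(uv_i, depth_i, K_i)                    # point in cam i
    p_j = T_j_i[:3, :3] @ p_i + T_j_i[:3, 3]               # point in cam j
    return project(p_j, K_j)

# Example: identical intrinsics, cameras 10 cm apart along x.
K = (300.0, 300.0, 320.0, 240.0)
T_j_i = np.eye(4); T_j_i[0, 3] = -0.10
print(predict_match_location(np.array([400.0, 260.0]), 2.5, K, K, T_j_i))
```

In practice the predicted location would seed a small window in which descriptors are compared, which is the efficiency gain the abstract refers to.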
Related papers
- ARAI-MVSNet: A multi-view stereo depth estimation network with adaptive
depth range and depth interval [19.28042366225802]
Multi-View Stereo(MVS) is a fundamental problem in geometric computer vision.
We present a novel multi-stage coarse-to-fine framework to achieve adaptive all-pixel depth range and depth interval.
Our model achieves state-of-the-art performance and yields competitive generalization ability.
arXiv Detail & Related papers (2023-08-17T14:52:11Z)
- SimpleMapping: Real-Time Visual-Inertial Dense Mapping with Deep
Multi-View Stereo [13.535871843518953]
We present a real-time visual-inertial dense mapping method that produces high-quality reconstructions using only monocular images and IMU readings.
We propose a sparse point aided stereo neural network (SPA-MVSNet) that can effectively leverage the informative but noisy sparse points from the VIO system.
Our proposed dense mapping system achieves a 39.7% improvement in F-score over existing systems when evaluated on the challenging scenarios of the EuRoC dataset.
arXiv Detail & Related papers (2023-06-14T17:28:45Z)
- Neural Implicit Dense Semantic SLAM [83.04331351572277]
We propose a novel RGBD vSLAM algorithm that learns a memory-efficient, dense 3D geometry, and semantic segmentation of an indoor scene in an online manner.
Our pipeline combines classical 3D vision-based tracking and loop closing with neural fields-based mapping.
Our proposed algorithm can greatly enhance scene perception and assist with a range of robot control problems.
arXiv Detail & Related papers (2023-04-27T23:03:52Z)
- Multi-View Guided Multi-View Stereo [39.116228971420874]
This paper introduces a novel deep framework for dense 3D reconstruction from multiple image frames.
Given a deep multi-view stereo network, our framework uses sparse depth hints to guide the neural network.
We evaluate our Multi-View Guided framework within a variety of state-of-the-art deep multi-view stereo networks.
arXiv Detail & Related papers (2022-10-20T17:59:18Z)
- 3DVNet: Multi-View Depth Prediction and Volumetric Refinement [68.68537312256144]
3DVNet is a novel multi-view stereo (MVS) depth-prediction method.
Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions.
We show that our method exceeds state-of-the-art accuracy in both depth prediction and 3D reconstruction metrics.
arXiv Detail & Related papers (2021-12-01T00:52:42Z)
- TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view
Stereo [55.30992853477754]
We present TANDEM, a real-time monocular tracking and dense mapping framework.
For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes.
TANDEM shows state-of-the-art real-time 3D reconstruction performance.
arXiv Detail & Related papers (2021-11-14T19:01:02Z)
- VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction [71.83308989022635]
In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results.
Our network operates in two steps: 1) computation of local depth maps with a deep MVS technique, and 2) fusion of the depth maps and image features into a single TSDF volume (a generic sketch of this fusion step follows this entry).
In order to improve the matching performance between images acquired from very different viewpoints, we introduce a rotation-invariant 3D convolution kernel called PosedConv.
arXiv Detail & Related papers (2021-08-19T11:33:58Z)
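Both the OmniSLAM pipeline above and VolumeFusion finish by fusing per-view depth maps into a single TSDF volume. The sketch below is a generic illustration of that fusion step, not the implementation of either paper: the pinhole projection, the dense voxel grid anchored at the origin, and the parameter values are all assumptions made for the example.

```python
# Minimal TSDF integration sketch (not the method of either paper):
# fuse one depth map into a voxel grid given a camera pose. A pinhole
# camera and a dense voxel grid are assumed purely for illustration;
# the papers use omnidirectional depth maps or learned features, and
# production systems use sparser voxel structures.
import numpy as np

def integrate_depth(tsdf, weights, depth, K, T_world_cam, voxel_size, trunc):
    """Update a TSDF volume and its per-voxel weights with one depth map."""
    fx, fy, cx, cy = K
    # Voxel centers in world coordinates (grid anchored at the origin).
    ii, jj, kk = np.meshgrid(*[np.arange(d) for d in tsdf.shape], indexing="ij")
    pts_w = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centers into the camera frame and project them.
    T_cam_world = np.linalg.inv(T_world_cam)
    pts_c = pts_w @ T_cam_world[:3, :3].T + T_cam_world[:3, 3]
    z = pts_c[:, 2]
    in_front = z > 1e-6
    u = np.zeros_like(z, dtype=int); v = np.zeros_like(z, dtype=int)
    u[in_front] = np.round(fx * pts_c[in_front, 0] / z[in_front] + cx).astype(int)
    v[in_front] = np.round(fy * pts_c[in_front, 1] / z[in_front] + cy).astype(int)
    h, w = depth.shape
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    # Signed distance along the viewing ray, truncated to [-trunc, trunc].
    sdf = d - z
    valid &= (d > 0) & (sdf > -trunc)
    sdf = np.clip(sdf / trunc, -1.0, 1.0)
    # Weighted running average of the truncated signed distance.
    tsdf_f, w_f = tsdf.reshape(-1), weights.reshape(-1)
    tsdf_f[valid] = (tsdf_f[valid] * w_f[valid] + sdf[valid]) / (w_f[valid] + 1)
    w_f[valid] += 1

# Example: a 64^3 grid, one constant synthetic depth map, identity pose.
tsdf = np.zeros((64, 64, 64)); weights = np.zeros_like(tsdf)
depth = np.full((480, 640), 2.0)
integrate_depth(tsdf, weights, depth, (525.0, 525.0, 320.0, 240.0),
                np.eye(4), voxel_size=0.05, trunc=0.15)
```

The running weighted average is what lets multiple depth maps, taken from different rig poses, agree on a single consistent surface before mesh extraction.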
- Multi-view Depth Estimation using Epipolar Spatio-Temporal Networks [87.50632573601283]
We present a novel method for multi-view depth estimation from a single video.
Our method achieves temporally coherent depth estimation results by using a novel Epipolar Spatio-Temporal (EST) transformer.
To reduce the computational cost, inspired by recent Mixture-of-Experts models, we design a compact hybrid network.
arXiv Detail & Related papers (2020-11-26T04:04:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.