High-resolution Ecosystem Mapping in Repetitive Environments Using Dual
Camera SLAM
- URL: http://arxiv.org/abs/2201.03364v1
- Date: Mon, 10 Jan 2022 14:29:37 GMT
- Authors: Brian M. Hopkinson and Suchendra M. Bhandarkar
- Abstract summary: We propose a dual-camera SLAM approach that uses a forward-facing wide-angle camera for localization and a downward-facing, narrower-angle, high-resolution camera for documentation.
An experimental comparison with several state-of-the-art SfM approaches shows that the dual-camera SLAM approach performs better in repetitive environments.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structure from Motion (SfM) techniques are being increasingly used to create
3D maps from images in many domains including environmental monitoring.
However, SfM techniques are often confounded in visually repetitive
environments as they rely primarily on globally distinct image features.
Simultaneous Localization and Mapping (SLAM) techniques offer a potential
solution in visually repetitive environments since they use local feature
matching, but SLAM approaches work best with wide-angle cameras that are often
unsuitable for documenting the environmental system of interest. We resolve
this issue by proposing a dual-camera SLAM approach that uses a forward-facing
wide-angle camera for localization and a downward-facing, narrower-angle,
high-resolution camera for documentation. Video frames acquired by the
forward-facing camera are processed using a standard SLAM approach, providing a
trajectory of the imaging system through the environment, which is then used to
guide the registration of the documentation-camera images. Fragmentary maps,
initially produced from the documentation camera images via monocular SLAM, are
subsequently scaled and aligned with the localization camera trajectory and
finally subjected to a global optimization procedure to produce a unified,
refined map. An experimental comparison with several state-of-the-art SfM
approaches, evaluated on selected ground control point markers, shows that the
dual-camera SLAM approach performs better in repetitive environments.
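The scale-and-align step described above (fitting a fragmentary monocular-SLAM map, which has arbitrary scale, to the metric localization-camera trajectory) amounts to estimating a similarity transform between corresponding poses. A minimal sketch of one standard way to do this, the closed-form Umeyama method; the function name and the use of NumPy are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Estimate the similarity transform (scale s, rotation R, translation t)
    that maps src points onto dst points, both of shape (N, 3), via the
    closed-form Umeyama (1991) solution. Here src would be camera centers
    from a fragmentary monocular-SLAM map and dst the corresponding
    positions on the localization-camera trajectory."""
    mu_src = src.mean(axis=0)
    mu_dst = dst.mean(axis=0)
    src_c = src - mu_src            # center both point sets
    dst_c = dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src   # optimal uniform scale
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

Applying `s * R @ p + t` to every map point and camera pose of a fragment places it in the localization frame, after which all fragments can be jointly refined by the global optimization the abstract mentions.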
Related papers
- Multicam-SLAM: Non-overlapping Multi-camera SLAM for Indirect Visual Localization and Navigation [1.3654846342364308]
This paper presents a novel approach to visual simultaneous localization and mapping (SLAM) using multiple RGB-D cameras.
The proposed method, Multicam-SLAM, significantly enhances the robustness and accuracy of SLAM systems.
Experiments in various environments demonstrate the superior accuracy and robustness of the proposed method compared to conventional single-camera SLAM systems.
arXiv Detail & Related papers (2024-06-10T15:36:23Z) - Enhanced Stable View Synthesis [86.69338893753886]
We introduce an approach to enhance the novel view synthesis from images taken from a freely moving camera.
The introduced approach focuses on outdoor scenes, where recovering an accurate geometric scaffold and camera poses is challenging.
arXiv Detail & Related papers (2023-03-30T01:53:14Z) - RelPose: Predicting Probabilistic Relative Rotation for Single Objects
in the Wild [73.1276968007689]
We describe a data-driven method for inferring the camera viewpoints given multiple images of an arbitrary object.
We show that our approach outperforms state-of-the-art SfM and SLAM methods given sparse images on both seen and unseen categories.
arXiv Detail & Related papers (2022-08-11T17:59:59Z) - MC-Blur: A Comprehensive Benchmark for Image Deblurring [127.6301230023318]
In most real-world images, blur is caused by different factors, e.g., motion and defocus.
We construct a new large-scale multi-cause image deblurring dataset (called MC-Blur).
Based on the MC-Blur dataset, we conduct extensive benchmarking studies to compare SOTA methods in different scenarios.
arXiv Detail & Related papers (2021-12-01T02:10:42Z) - Dual-Camera Super-Resolution with Aligned Attention Modules [56.54073689003269]
We present a novel approach to reference-based super-resolution (RefSR) with a focus on dual-camera super-resolution (DCSR).
Our proposed method generalizes the standard patch-based feature matching with spatial alignment operations.
To bridge the domain gaps between real-world images and the training images, we propose a self-supervised domain adaptation strategy.
arXiv Detail & Related papers (2021-09-03T07:17:31Z) - Improved Real-Time Monocular SLAM Using Semantic Segmentation on
Selective Frames [15.455647477995312]
Monocular simultaneous localization and mapping (SLAM) is emerging in advanced driver assistance systems and autonomous driving.
This paper proposes an improved real-time monocular SLAM using deep learning-based semantic segmentation.
Experiments with six video sequences demonstrate that the proposed monocular SLAM system achieves significantly more accurate trajectory tracking.
arXiv Detail & Related papers (2021-04-30T22:34:45Z) - Real-time dense 3D Reconstruction from monocular video data captured by
low-cost UAVs [0.3867363075280543]
Real-time 3D reconstruction enables fast dense mapping of the environment which benefits numerous applications, such as navigation or live evaluation of an emergency.
In contrast to most real-time capable approaches, our approach does not need an explicit depth sensor.
By exploiting the self-motion of the unmanned aerial vehicle (UAV) flying with oblique view around buildings, we estimate both camera trajectory and depth for selected images with enough novel content.
arXiv Detail & Related papers (2021-04-21T13:12:17Z) - OmniSLAM: Omnidirectional Localization and Dense Mapping for
Wide-baseline Multi-camera Systems [88.41004332322788]
We present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras.
For more practical and accurate reconstruction, we first introduce improved and lightweight deep neural networks for omnidirectional depth estimation.
We integrate our omnidirectional depth estimates into the visual odometry (VO) and add a loop closing module for global consistency.
arXiv Detail & Related papers (2020-03-18T05:52:10Z) - Redesigning SLAM for Arbitrary Multi-Camera Systems [51.81798192085111]
Adding more cameras to SLAM systems improves robustness and accuracy but complicates the design of the visual front-end significantly.
In this work, we aim at an adaptive SLAM system that works for arbitrary multi-camera setups.
We adapt a state-of-the-art visual-inertial odometry with these modifications, and experimental results show that the modified pipeline can adapt to a wide range of camera setups.
arXiv Detail & Related papers (2020-03-04T11:44:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.