6D Camera Relocalization in Visually Ambiguous Extreme Environments
- URL: http://arxiv.org/abs/2207.06333v1
- Date: Wed, 13 Jul 2022 16:40:02 GMT
- Title: 6D Camera Relocalization in Visually Ambiguous Extreme Environments
- Authors: Yang Zheng, Tolga Birdal, Fei Xia, Yanchao Yang, Yueqi Duan, Leonidas J. Guibas
- Abstract summary: We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains.
Our method achieves comparable performance with state-of-the-art methods on the indoor benchmark (7-Scenes dataset) using only 20% training data.
- Score: 79.68352435957266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel method to reliably estimate the pose of a camera given a
sequence of images acquired in extreme environments such as deep seas or
extraterrestrial terrains. Data acquired under these challenging conditions are
corrupted by textureless surfaces, image degradation, and the presence of
repetitive and highly ambiguous structures. When naively deployed, the
state-of-the-art methods can fail in those scenarios as confirmed by our
empirical analysis. In this paper, we attempt to make camera relocalization
work in these extreme situations. To this end, we propose: (i) a hierarchical
localization system, where we leverage temporal information and (ii) a novel
environment-aware image enhancement method to boost the robustness and
accuracy. Our extensive experimental results demonstrate superior performance
in favor of our method under two extreme settings: localizing an autonomous
underwater vehicle and localizing a planetary rover in a Mars-like desert. In
addition, our method achieves comparable performance with state-of-the-art
methods on the indoor benchmark (7-Scenes dataset) using only 20% training
data.
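The abstract does not spell out how temporal information enters the hierarchical system; as a minimal illustrative sketch (not the authors' implementation, all names hypothetical), one can re-rank per-frame retrieval candidates by their agreement with a constant-velocity motion prior, which is enough to break ties between visually ambiguous places:

```python
import numpy as np

def predict_position(prev_position, prev_velocity, dt):
    """Constant-velocity motion prior (hypothetical, for illustration)."""
    return prev_position + prev_velocity * dt

def select_candidate(candidates, predicted_position, retrieval_scores, alpha=0.5):
    """Re-rank per-frame retrieval candidates by temporal consistency.

    candidates: (N, 3) candidate camera positions from image retrieval
    retrieval_scores: (N,) appearance-similarity scores in [0, 1]
    alpha: trade-off between appearance and temporal agreement
    """
    dists = np.linalg.norm(candidates - predicted_position, axis=1)
    temporal_scores = np.exp(-dists)  # closer to the motion prediction -> higher score
    combined = alpha * retrieval_scores + (1.0 - alpha) * temporal_scores
    return candidates[np.argmax(combined)]
```

In repetitive scenes several candidates can have near-identical appearance scores, and the temporal term is what breaks the tie.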
Related papers
- Pose Estimation from Camera Images for Underwater Inspection [0.0]
Visual localization is a cost-effective alternative to inertial navigation systems.
We show that machine-learning-based pose estimation from images is promising in underwater environments.
We employ novel view synthesis models to generate augmented training data, significantly enhancing pose estimation in unexplored regions.
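The augmentation pipeline is not detailed in the abstract; a minimal sketch of the general idea, with a hypothetical nvs_model standing in for whatever novel-view-synthesis model is fit to the surveyed area:

```python
import numpy as np

def augment_with_synthetic_views(nvs_model, poses, n_perturbations=5, sigma_t=0.2):
    """Densify a sparse pose-labelled image set with rendered views.

    nvs_model: hypothetical novel-view-synthesis model with a render(pose) method
    poses: list of 4x4 camera-to-world matrices covered by the real survey
    """
    augmented = []
    for pose in poses:
        for _ in range(n_perturbations):
            perturbed = pose.copy()
            perturbed[:3, 3] += np.random.normal(0.0, sigma_t, size=3)  # jitter position
            image = nvs_model.render(perturbed)    # synthesize the unseen viewpoint
            augmented.append((image, perturbed))   # new (image, pose) training pair
    return augmented
```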
arXiv Detail & Related papers (2024-07-24T03:00:53Z)
- SG-NeRF: Neural Surface Reconstruction with Scene Graph Optimization [16.460851701725392]
We present a novel approach that optimizes radiance fields with scene graphs to mitigate the influence of outlier poses.
Our method incorporates an adaptive inlier-outlier confidence estimation scheme based on scene graphs.
We also introduce an effective intersection-over-union (IoU) loss to optimize the camera pose and surface geometry.
arXiv Detail & Related papers (2024-07-17T15:50:17Z)
- Cameras as Rays: Pose Estimation via Ray Diffusion [54.098613859015856]
Estimating camera poses is a fundamental task for 3D reconstruction and remains challenging given sparsely sampled views.
We propose a distributed representation of camera pose that treats a camera as a bundle of rays.
Our proposed methods, both regression- and diffusion-based, demonstrate state-of-the-art performance on camera pose estimation on CO3D.
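The abstract does not fix the ray parameterization; a common choice for such bundle-of-rays representations is Plücker coordinates (a unit direction plus its moment about the origin). A minimal sketch for a pinhole camera, assuming world-to-camera extrinsics R, t:

```python
import numpy as np

def camera_to_rays(K, R, t, pixels):
    """Convert a pinhole camera and pixel coordinates to Plücker rays.

    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation
    pixels: (N, 2) pixel coordinates
    Returns (N, 6) rays as (direction, moment) pairs.
    """
    center = -R.T @ t                                    # camera center in world frame
    homog = np.hstack([pixels, np.ones((len(pixels), 1))])
    dirs = (R.T @ np.linalg.inv(K) @ homog.T).T          # back-project pixels to world rays
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    moments = np.cross(center, dirs)                     # m = o x d encodes the ray's offset
    return np.hstack([dirs, moments])
```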
arXiv Detail & Related papers (2024-02-22T18:59:56Z)
- View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z)
- A ground-based dataset and a diffusion model for on-orbit low-light image enhancement [7.815138548685792]
We present a dataset of the Beidou Navigation Satellite for on-orbit low-light image enhancement (LLIE).
To evenly sample poses of different orientations and distances without collision, a collision-free workspace and a stratified pose-sampling scheme are proposed (sketched below).
To enhance image contrast without over-exposure or blurred details, we design a fused attention module to highlight structures and dark regions.
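A minimal illustration of stratified pose sampling with collision rejection, reduced to a planar distance/yaw parameterization for brevity; the collision_free predicate and the strata are hypothetical stand-ins for the paper's workspace:

```python
import numpy as np

def stratified_poses(dist_bins, yaw_bins, samples_per_bin, collision_free):
    """Sample viewing poses evenly over distance/orientation strata.

    dist_bins, yaw_bins: stratum edges, e.g. from np.linspace
    collision_free: hypothetical predicate rejecting poses that hit obstacles
    """
    poses = []
    for d_lo, d_hi in zip(dist_bins[:-1], dist_bins[1:]):
        for y_lo, y_hi in zip(yaw_bins[:-1], yaw_bins[1:]):
            accepted, attempts = 0, 0
            while accepted < samples_per_bin and attempts < 1000:
                attempts += 1
                d = np.random.uniform(d_lo, d_hi)
                yaw = np.random.uniform(y_lo, y_hi)
                pose = (d * np.cos(yaw), d * np.sin(yaw), yaw)  # simplified planar pose
                if collision_free(pose):       # reject-and-resample on collision
                    poses.append(pose)
                    accepted += 1
    return poses
```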
arXiv Detail & Related papers (2023-06-25T12:15:44Z)
- Render-and-Compare: Cross-View 6 DoF Localization from Noisy Prior [17.08552155321949]
In this work, we propose to go beyond the traditional ground-level setting and exploit the cross-view localization from aerial to ground.
As no public dataset exists for the studied problem, we collect a new dataset that provides a variety of cross-view images from smartphones and drones.
We develop a semi-automatic system to acquire ground-truth poses for query images.
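The abstract leaves the localization machinery implicit; the render-and-compare pattern named in the title can be sketched as a simple search over pose perturbations around the noisy prior (renderer, score, and the perturbation set are hypothetical placeholders):

```python
import numpy as np

def render_and_compare(query_image, renderer, prior_pose, perturbations, score):
    """Refine a noisy prior pose by rendering candidates and scoring them.

    renderer: hypothetical function mapping a 4x4 pose to a synthetic image
    perturbations: list of 4x4 pose offsets applied to the prior
    score: image-similarity function (higher is better)
    """
    best_pose, best_score = prior_pose, -np.inf
    for delta in perturbations:
        candidate = prior_pose @ delta    # perturb the prior pose
        rendered = renderer(candidate)    # render the scene from the candidate
        s = score(query_image, rendered)  # compare against the real query
        if s > best_score:
            best_pose, best_score = candidate, s
    return best_pose
```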
arXiv Detail & Related papers (2023-02-13T11:43:47Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
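The paper's joint HDR/super-resolution algorithm is more involved, but the core exposure-bracketing idea can be sketched as a confidence-weighted merge of linear frames into a single radiance map (a textbook-style simplification, not the paper's method):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge exposure-bracketed linear frames into one HDR radiance map.

    images: list of (H, W) frames with intensities normalized to [0, 1]
            (a simplification; the paper works on raw bursts and also
            aligns and super-resolves them)
    exposure_times: exposure time of each frame in seconds
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = np.exp(-4.0 * (img - 0.5) ** 2)  # trust mid-tones, down-weight clipped pixels
        num += w * (img / t)                 # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)       # confidence-weighted average radiance
```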
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which has just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability constraints.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
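The abstract only names the idea; one way to read "PCA of the relative translation information matrix" is an eigen-analysis of its 3x3 block, where a poorly constrained direction of motion shows up as a small eigenvalue. The risk metric below is an illustrative choice, not the paper's:

```python
import numpy as np

def scale_drift_risk(translation_information):
    """Flag weakly constrained translation directions via eigen-analysis.

    translation_information: 3x3 information (inverse-covariance) block of a
    relative translation estimate. A small smallest eigenvalue means some
    direction of motion is poorly constrained, signalling scale-drift risk.
    """
    eigvals = np.linalg.eigvalsh(translation_information)  # ascending order
    return eigvals[-1] / max(eigvals[0], 1e-12)  # large ratio -> high drift risk
```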
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- Unlimited Resolution Image Generation with R2D2-GANs [69.90258455164513]
We present a novel simulation technique for generating high quality images of any predefined resolution.
This method can be used to synthesize sonar scans of size equivalent to those collected during a full-length mission.
The data produced are continuous, realistic-looking, and can be generated at least twice as fast as the real speed of acquisition.
arXiv Detail & Related papers (2020-03-02T17:49:32Z)