Monocular Direct Sparse Localization in a Prior 3D Surfel Map
- URL: http://arxiv.org/abs/2002.09923v1
- Date: Sun, 23 Feb 2020 15:29:38 GMT
- Title: Monocular Direct Sparse Localization in a Prior 3D Surfel Map
- Authors: Haoyang Ye, Huaiyang Huang and Ming Liu
- Abstract summary: We introduce an approach to tracking the pose of a monocular camera in a prior surfel map.
Tracked points with the global planar information contribute global constraints, while those without contribute local frame-to-frame constraints.
Our approach formulates all constraints in the form of direct photometric errors within a local window of the frames.
- Score: 18.567015858362208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce an approach to tracking the pose of a monocular
camera in a prior surfel map. By rendering vertex and normal maps from the
prior surfel map, the global planar information for the sparse tracked points
in the image frame is obtained. Tracked points with the global planar
information contribute global constraints, while those without contribute
local frame-to-frame constraints. Our approach formulates all of these
constraints as direct photometric errors within a local window of frames. The
final optimization uses them to estimate global 6-DoF camera poses with
absolute scale. Extensive simulation and real-world experiments demonstrate
that our monocular method provides accurate camera localization under a
variety of conditions.
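The direct photometric formulation at the core of the abstract can be sketched as follows. This is a minimal illustration only, assuming a pinhole camera, known per-point depths, and nearest-neighbour intensity lookup; the function and variable names are ours, and the paper's actual system additionally incorporates global planar constraints rendered from the surfel map and optimizes poses jointly over a window of frames.

```python
import numpy as np

def se3_transform(T, p):
    # Apply a 4x4 rigid-body transform to a 3D point.
    return T[:3, :3] @ p + T[:3, 3]

def project(K, p):
    # Pinhole projection of a camera-frame 3D point to pixel coordinates.
    uv = K @ p
    return uv[:2] / uv[2]

def photometric_residuals(I_ref, I_tgt, K, T_tgt_ref, pts_ref, depths):
    """Direct photometric residuals for a candidate relative pose.

    For each tracked pixel (u, v) in the reference image with depth d:
    back-project to 3D, transform into the target frame with T_tgt_ref,
    reproject, and compare image intensities. A direct method minimizes
    the sum of squares of these residuals over the pose parameters.
    """
    K_inv = np.linalg.inv(K)
    residuals = []
    for (u, v), d in zip(pts_ref, depths):
        p_ref = d * (K_inv @ np.array([u, v, 1.0]))   # back-project
        p_tgt = se3_transform(T_tgt_ref, p_ref)       # move to target frame
        if p_tgt[2] <= 0:                             # behind the camera
            continue
        u2, v2 = project(K, p_tgt)                    # reproject
        ui, vi = int(round(u2)), int(round(v2))
        if 0 <= vi < I_tgt.shape[0] and 0 <= ui < I_tgt.shape[1]:
            residuals.append(float(I_tgt[vi, ui]) - float(I_ref[int(v), int(u)]))
    return np.array(residuals)
```

Under the identity pose and identical images, the residuals are exactly zero; a Gauss-Newton or Levenberg-Marquardt solver would perturb the pose to drive these residuals down for real image pairs.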
Related papers
- Cameras as Rays: Pose Estimation via Ray Diffusion [54.098613859015856]
Estimating camera poses is a fundamental task for 3D reconstruction and remains challenging given sparsely sampled views.
We propose a distributed representation of camera pose that treats a camera as a bundle of rays.
Our proposed methods, both regression- and diffusion-based, demonstrate state-of-the-art performance on camera pose estimation on CO3D.
arXiv Detail & Related papers (2024-02-22T18:59:56Z) - Global Localization: Utilizing Relative Spatio-Temporal Geometric Constraints from Adjacent and Distant Cameras [7.836516315882875]
Re-localizing a camera from a single image in a previously mapped area is vital for many computer vision applications in robotics and augmented/virtual reality.
We propose to leverage a novel network of relative spatial and temporal geometric constraints to guide the training of a Deep Network for localization.
We show that our method, through these constraints, is capable of learning to localize when little or very sparse ground-truth 3D coordinates are available.
arXiv Detail & Related papers (2023-12-01T11:03:07Z) - NEWTON: Neural View-Centric Mapping for On-the-Fly Large-Scale SLAM [51.21564182169607]
Newton is a view-centric mapping method that dynamically constructs neural fields based on run-time observation.
Our method enables camera pose updates using loop closures and scene boundary updates by representing the scene with multiple neural fields.
The experimental results demonstrate the superior performance of our method over existing world-centric neural field-based SLAM systems.
arXiv Detail & Related papers (2023-03-23T20:22:01Z) - TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo [55.30992853477754]
We present TANDEM, a real-time monocular tracking and dense mapping framework.
For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes.
TANDEM shows state-of-the-art real-time 3D reconstruction performance.
arXiv Detail & Related papers (2021-11-14T19:01:02Z) - Direct and Sparse Deformable Tracking [4.874780144224057]
We introduce a novel deformable camera tracking method with a local deformation model for each point.
Thanks to a direct photometric error cost function, we can track the position and orientation of the surfel without an explicit global deformation model.
arXiv Detail & Related papers (2021-09-15T15:28:10Z) - Estimating Egocentric 3D Human Pose in Global Space [70.7272154474722]
We present a new method for egocentric global 3D body pose estimation using a single head-mounted fisheye camera.
Our approach outperforms state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-04-27T20:01:57Z) - 3D Surfel Map-Aided Visual Relocalization with Learned Descriptors [15.608529165143718]
We introduce a method for visual relocalization using the geometric information from a 3D surfel map.
A visual database is first built by global indices from the 3D surfel map rendering, which provides associations between image points and 3D surfels.
A hierarchical camera relocalization algorithm then utilizes the visual database to estimate 6-DoF camera poses.
arXiv Detail & Related papers (2021-04-08T15:59:57Z) - Calibrated and Partially Calibrated Semi-Generalized Homographies [65.29477277713205]
We propose the first minimal solutions for estimating the semi-generalized homography given a perspective and a generalized camera.
The proposed solvers are stable and efficient as demonstrated by a number of synthetic and real-world experiments.
arXiv Detail & Related papers (2021-03-11T08:56:24Z) - Single View Metrology in the Wild [94.7005246862618]
We present a novel approach to single view metrology that can recover the absolute scale of a scene represented by 3D heights of objects or camera height above the ground.
Our method relies on data-driven priors learned by a deep network specifically designed to imbibe weakly supervised constraints from the interplay of the unknown camera with 3D entities such as object heights.
We demonstrate state-of-the-art qualitative and quantitative results on several datasets as well as applications including virtual object insertion.
arXiv Detail & Related papers (2020-07-18T22:31:33Z) - GPO: Global Plane Optimization for Fast and Accurate Monocular SLAM Initialization [22.847353792031488]
The algorithm starts with homography estimation in a sliding window.
The proposed method fully exploits the plane information from multiple frames and avoids the ambiguities in homography decomposition.
Experimental results show that our method outperforms the fine-tuned baselines in both accuracy and real-time performance.
arXiv Detail & Related papers (2020-04-25T03:57:50Z) - Deep-Geometric 6 DoF Localization from a Single Image in Topo-metric Maps [39.05304338751328]
We describe a Deep-Geometric Localizer that is able to estimate the full 6 Degree of Freedom (DoF) global pose of the camera from a single image.
Our method decouples the mapping and localization algorithms (stereo and monocular, respectively) and allows accurate 6-DoF pose estimation in a previously mapped environment.
With potential VR/AR and localization applications in single-camera devices such as mobile phones and drones, our hybrid algorithm compares favourably with the fully deep-learning-based PoseNet.
arXiv Detail & Related papers (2020-02-04T10:11:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.