Relative Pose from Deep Learned Depth and a Single Affine Correspondence
- URL: http://arxiv.org/abs/2007.10082v1
- Date: Mon, 20 Jul 2020 13:24:28 GMT
- Title: Relative Pose from Deep Learned Depth and a Single Affine Correspondence
- Authors: Ivan Eichhardt, Daniel Barath
- Abstract summary: We propose a new approach for combining deep-learned non-metric monocular depth with affine correspondences.
Considering the depth information and affine features, two new constraints on the camera pose are derived.
The proposed solver is usable within 1-point RANSAC approaches.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new approach for combining deep-learned non-metric monocular
depth with affine correspondences (ACs) to estimate the relative pose of two
calibrated cameras from a single correspondence. Considering the depth
information and affine features, two new constraints on the camera pose are
derived. The proposed solver is usable within 1-point RANSAC approaches. Thus,
the processing time of the robust estimation is linear in the number of
correspondences and, therefore, orders of magnitude faster than by using
traditional approaches. The proposed 1AC+D solver is tested both on synthetic
data and on 110395 publicly available real image pairs where we used an
off-the-shelf monocular depth network to provide up-to-scale depth per pixel.
The proposed 1AC+D leads to similar accuracy as traditional approaches while
being significantly faster. When solving large-scale problems, e.g., pose-graph
initialization for Structure-from-Motion (SfM) pipelines, the overhead of
obtaining ACs and monocular depth is negligible compared to the speed-up gained
in the pairwise geometric verification, i.e., relative pose estimation. This is
demonstrated on scenes from the 1DSfM dataset using a state-of-the-art global
SfM algorithm. Source code: https://github.com/eivan/one-ac-pose
Related papers
- A Construct-Optimize Approach to Sparse View Synthesis without Camera Pose [44.13819148680788]
We develop a novel construct-and-optimize method for sparse view synthesis without camera poses.
Specifically, we construct a solution by using monocular depth and projecting pixels back into the 3D world.
We demonstrate results on the Tanks and Temples and Static Hikes datasets with as few as three widely-spaced views.
arXiv Detail & Related papers (2024-05-06T17:36:44Z) - DVMNet: Computing Relative Pose for Unseen Objects Beyond Hypotheses [59.51874686414509]
Current approaches approximate the continuous pose representation with a large number of discrete pose hypotheses.
We present a Deep Voxel Matching Network (DVMNet) that eliminates the need for pose hypotheses and computes the relative object pose in a single pass.
Our method delivers more accurate relative pose estimates for novel objects at a lower computational cost compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-03-20T15:41:32Z) - Affine Correspondences between Multi-Camera Systems for Relative Pose Estimation [11.282703971318934]
We present a novel method to compute the relative pose of multi-camera systems using two affine correspondences (ACs).
This paper shows that the 6DOF relative pose estimation problem using ACs permits a feasible minimal solution.
Experiments on both virtual and real multi-camera systems prove that the proposed solvers are more efficient than the state-of-the-art algorithms.
arXiv Detail & Related papers (2023-06-22T15:52:48Z) - Single Image Depth Prediction Made Better: A Multivariate Gaussian Take [163.14849753700682]
We introduce an approach that performs continuous modeling of per-pixel depth.
Our method (named MG) achieves accuracy among the top entries on the KITTI depth-prediction benchmark leaderboard.
arXiv Detail & Related papers (2023-03-31T16:01:03Z) - Deep Two-View Structure-from-Motion Revisited [83.93809929963969]
Two-view structure-from-motion (SfM) is the cornerstone of 3D reconstruction and visual SLAM.
We propose to revisit the problem of deep two-view SfM by leveraging the well-posedness of the classic pipeline.
Our method consists of 1) an optical flow estimation network that predicts dense correspondences between two frames; 2) a normalized pose estimation module that computes relative camera poses from the 2D optical flow correspondences; and 3) a scale-invariant depth estimation network that leverages epipolar geometry to reduce the search space, refine the dense correspondences, and estimate relative depth maps.
arXiv Detail & Related papers (2021-04-01T15:31:20Z) - Monocular Depth Parameterizing Networks [15.791732557395552]
We propose a network structure that provides a parameterization of a set of depth maps with feasible shapes.
This allows us to search the shapes for a photo consistent solution with respect to other images.
Our experimental evaluation shows that our method generates more accurate depth maps and generalizes better than competing state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-21T13:02:41Z) - Robust Consistent Video Depth Estimation [65.53308117778361]
We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video.
Our algorithm combines two complementary techniques: (1) flexible deformation-splines for low-frequency large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details.
In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures containing a significant amount of noise, shake, motion blur, and rolling shutter deformations.
arXiv Detail & Related papers (2020-12-10T18:59:48Z) - Efficient Initial Pose-graph Generation for Global SfM [56.38930515826556]
We propose ways to speed up the initial pose-graph generation for global Structure-from-Motion algorithms.
The algorithms are tested on 402130 image pairs from the 1DSfM dataset.
arXiv Detail & Related papers (2020-11-24T09:32:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.