Range-Agnostic Multi-View Depth Estimation With Keyframe Selection
- URL: http://arxiv.org/abs/2401.14401v1
- Date: Thu, 25 Jan 2024 18:59:42 GMT
- Title: Range-Agnostic Multi-View Depth Estimation With Keyframe Selection
- Authors: Andrea Conti, Matteo Poggi, Valerio Cambareri, Stefano Mattoccia
- Abstract summary: Methods for 3D reconstruction from posed frames require prior knowledge about the scene's metric range.
RAMDepth is an efficient and purely 2D framework that reverses the order of the depth estimation and matching steps.
- Score: 33.99466211478322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Methods for 3D reconstruction from posed frames require prior knowledge
about the scene's metric range, usually to recover matching cues along the
epipolar lines and narrow the search range. However, such a prior might not be
directly available, or might be estimated inaccurately, in real scenarios --
e.g., outdoor 3D reconstruction from video sequences -- thereby heavily
hampering performance. In this paper, we focus on multi-view depth estimation
without requiring prior knowledge about the metric range of the scene by
proposing RAMDepth, an efficient and purely 2D framework that reverses the
order of the depth estimation and matching steps. Moreover, we demonstrate the
capability of our framework to provide rich insights about the quality of the
views used for prediction. Additional material can be found on our project page
https://andreaconti.github.io/projects/range_agnostic_multi_view_depth.
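To make the reversed ordering concrete, below is a minimal, hypothetical sketch contrasting a conventional plane-sweep, which needs a known metric range, with a range-agnostic loop in the spirit of the abstract: depth is predicted first, and matching happens at the current estimate. The abstract does not specify the architecture, so every name here (`project`, the update rule, the iteration count) is an illustrative assumption, not RAMDepth's actual implementation.

```python
import numpy as np

def plane_sweep_depth(ref, src, project, d_min, d_max, n_planes=64):
    """Conventional approach: needs a metric range [d_min, d_max] to
    sample depth hypotheses along the epipolar lines."""
    hypotheses = np.linspace(d_min, d_max, n_planes)
    # Photometric cost per hypothesis; `project(src, d)` is a hypothetical
    # callback warping the source view onto the reference at depth d.
    costs = np.stack([np.abs(ref - project(src, d)).mean(axis=-1)
                      for d in hypotheses])
    return hypotheses[np.argmin(costs, axis=0)]   # winner-take-all depth map

def range_agnostic_depth(ref, src, project, n_iters=8, lr=0.1):
    """Sketch of the reversed ordering: start from an arbitrary guess
    (no metric prior), match the source view at the *current* depth
    estimate, and refine the estimate from the residual."""
    depth = np.ones(ref.shape[:2])                # arbitrary init, no range prior
    for _ in range(n_iters):
        warped = project(src, depth)              # matching *after* estimation
        residual = (ref - warped).mean(axis=-1)   # per-pixel photometric error
        # Toy update; in a learned framework this step would be predicted
        # by a 2D network fed with the matching features.
        depth = depth + lr * residual
    return depth
```

The only point of the sketch is the control flow: `range_agnostic_depth` never needs `d_min`/`d_max`, which is what "range-agnostic" refers to.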
Related papers
- KRONC: Keypoint-based Robust Camera Optimization for 3D Car Reconstruction [58.04846444985808]
This paper introduces KRONC, a novel approach that infers view poses by leveraging prior knowledge about the object to be reconstructed and its representation through semantic keypoints.
With a focus on vehicle scenes, KRONC estimates the positions of the views as the solution to a lightweight optimization problem that drives the keypoints' back-projections to converge to a single point.
arXiv Detail & Related papers (2024-09-09T08:08:05Z)
- DoubleTake: Geometry Guided Depth Estimation [17.464549832122714]
Estimating depth from a sequence of posed RGB images is a fundamental computer vision task.
We introduce a reconstruction method that combines volume features with a hint of the prior geometry, rendered as a depth map from the current camera location.
We demonstrate that our method can run at interactive speeds while producing state-of-the-art estimates of depth and 3D scene reconstruction in both offline and incremental evaluation scenarios.
arXiv Detail & Related papers (2024-06-26T14:29:05Z)
- Calibrating Panoramic Depth Estimation for Practical Localization and Mapping [20.621442016969976]
The absolute depth values of surrounding environments provide crucial cues for various assistive technologies, such as localization, navigation, and 3D structure estimation.
We propose that accurate depth estimated from panoramic images can serve as a powerful and lightweight input for a wide range of downstream tasks requiring 3D information.
arXiv Detail & Related papers (2023-08-27T04:50:05Z)
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
- Blur aware metric depth estimation with multi-focus plenoptic cameras [8.508198765617196]
We present a new metric depth estimation algorithm using only raw images from a multi-focus plenoptic camera.
The proposed approach is especially suited for the multi-focus configuration where several micro-lenses with different focal lengths are used.
arXiv Detail & Related papers (2023-08-08T13:38:50Z)
- CMR3D: Contextualized Multi-Stage Refinement for 3D Object Detection [57.44434974289945]
We propose the Contextualized Multi-Stage Refinement for 3D Object Detection (CMR3D) framework.
Our framework takes a 3D scene as input and strives to explicitly integrate useful contextual information of the scene.
In addition to 3D object detection, we investigate the effectiveness of our framework for the problem of 3D object counting.
arXiv Detail & Related papers (2022-09-13T05:26:09Z)
- How Far Can I Go?: A Self-Supervised Approach for Deterministic Video Depth Forecasting [23.134156184783357]
We present a novel self-supervised method to anticipate the depth estimate for a future, unobserved real-world urban scene.
This work is the first to explore self-supervised learning for monocular depth estimation of future, unobserved frames of a video.
arXiv Detail & Related papers (2022-07-01T15:51:17Z)
- Towards 3D Scene Reconstruction from Locally Scale-Aligned Monocular Video Depth [90.33296913575818]
In some video-based scenarios, such as video depth estimation and 3D scene reconstruction from a video, the unknown scale and shift residing in per-frame predictions may cause depth inconsistency.
We propose a locally weighted linear regression method to recover the scale and shift with very sparse anchor points (see the sketch after this list).
Our method can boost the performance of existing state-of-the-art approaches by up to 50% over several zero-shot benchmarks.
arXiv Detail & Related papers (2022-02-03T08:52:54Z)
- Soft Expectation and Deep Maximization for Image Feature Detection [68.8204255655161]
We propose SEDM, an iterative semi-supervised learning process that flips the question and first looks for repeatable 3D points, then trains a detector to localize them in image space.
Our results show that this new model trained using SEDM is able to better localize the underlying 3D points in a scene.
arXiv Detail & Related papers (2021-04-21T00:35:32Z)
- Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
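As referenced in the "Towards 3D Scene Reconstruction from Locally Scale-Aligned Monocular Video Depth" entry above, scale and shift can be recovered from very sparse anchor points. Below is a minimal sketch of that idea as locally weighted linear regression; the Gaussian distance weighting, the `sigma` value, and the closed-form 2x2 solve are standard textbook choices assumed here, not details taken from the paper.

```python
import numpy as np

def local_scale_shift(anchor_xy, anchor_pred, anchor_gt, query_xy, sigma=32.0):
    """At one query location, fit depth_metric ~= s * depth_pred + t by
    weighted least squares over sparse anchors, with Gaussian weights on
    spatial distance (locally weighted linear regression)."""
    d2 = np.sum((anchor_xy - query_xy) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # closer anchors count more
    # Weighted normal equations for y = s*x + t (closed-form 2x2 system).
    sw, swx = w.sum(), (w * anchor_pred).sum()
    swy = (w * anchor_gt).sum()
    swxx = (w * anchor_pred ** 2).sum()
    swxy = (w * anchor_pred * anchor_gt).sum()
    denom = sw * swxx - swx ** 2                # ~0 if anchor depths coincide
    s = (sw * swxy - swx * swy) / denom
    t = (swy - s * swx) / sw
    return s, t

# Toy usage: 5 anchors with known metric depth, alignment queried at (10, 20).
rng = np.random.default_rng(0)
anchor_xy = rng.uniform(0, 64, size=(5, 2))
anchor_pred = rng.uniform(0.2, 1.0, size=5)     # relative (up-to-scale) depths
anchor_gt = 3.0 * anchor_pred + 0.5             # pretend true scale=3, shift=0.5
s, t = local_scale_shift(anchor_xy, anchor_pred, anchor_gt, np.array([10.0, 20.0]))
# s ~= 3.0, t ~= 0.5; apply depth_metric = s * depth_pred + t at each location
```

Solving the fit locally per query point, rather than globally, is what lets the alignment absorb spatially varying scale drift across frames.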
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.