View-consistent 4D Light Field Depth Estimation
- URL: http://arxiv.org/abs/2009.04065v1
- Date: Wed, 9 Sep 2020 01:47:34 GMT
- Title: View-consistent 4D Light Field Depth Estimation
- Authors: Numair Khan, Min H. Kim, James Tompkin
- Abstract summary: We propose a method to compute depth maps for every sub-aperture image in a light field in a view consistent way.
Our method precisely defines depth edges via EPIs, then we diffuse these edges spatially within the central view.
- Score: 37.04038603184669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method to compute depth maps for every sub-aperture image in a
light field in a view consistent way. Previous light field depth estimation
methods typically estimate a depth map only for the central sub-aperture view,
and struggle with view consistent estimation. Our method precisely defines
depth edges via EPIs, then we diffuse these edges spatially within the central
view. These depth estimates are then propagated to all other views in an
occlusion-aware way. Finally, disoccluded regions are completed by diffusion in
EPI space. Our method runs efficiently with respect to both other classical and
deep learning-based approaches, and achieves competitive quantitative metrics
and qualitative performance on both synthetic and real-world light fields.
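The abstract describes a four-stage pipeline: locate depth edges on EPIs, diffuse them spatially within the central view, propagate the depth to all other sub-aperture views in an occlusion-aware way, and fill disoccluded regions by diffusion in EPI space. The following is a minimal NumPy sketch of only the edge-localization and occlusion-aware propagation stages, under assumed conventions (a grayscale light field indexed as lf[u, v, y, x], disparity proportional to inverse depth); the function names and threshold are hypothetical, and this is not the authors' implementation.

```python
# Hedged sketch of two stages from the abstract: depth-edge localization on
# EPIs of the central view, and occlusion-aware propagation of central-view
# depth to another sub-aperture view. The layout lf[u, v, y, x] and the
# disparity model (disp ~ 1/depth) are assumptions, not the paper's code.
import numpy as np

def epi_depth_edges(lf, grad_thresh=0.1):
    """Flag likely depth edges in the central view from horizontal EPIs.

    For each image row y, the EPI lf[u_c, :, y, :] stacks that row across the
    horizontal views; strong gradients along the angular axis indicate
    occlusion boundaries, i.e. depth edges.
    """
    U, V, H, W = lf.shape
    u_c = U // 2
    edges = np.zeros((H, W), dtype=bool)
    for y in range(H):
        epi = lf[u_c, :, y, :]                          # shape (V, W)
        angular_grad = np.abs(np.diff(epi, axis=0)).max(axis=0)
        edges[y] = angular_grad > grad_thresh
    return edges

def propagate_depth(depth_c, du, dv):
    """Occlusion-aware forward warp of the central-view depth to the view
    offset by (du, dv) angular steps, keeping the nearest surface (z-buffer).
    Pixels left as NaN are disocclusions to be filled later (in EPI space)."""
    H, W = depth_c.shape
    disparity = 1.0 / np.maximum(depth_c, 1e-6)          # assumed: disp ~ 1/depth
    ys, xs = np.mgrid[0:H, 0:W]
    xt = np.clip(np.round(xs + du * disparity).astype(int), 0, W - 1)
    yt = np.clip(np.round(ys + dv * disparity).astype(int), 0, H - 1)
    warped = np.full((H, W), np.inf)
    np.minimum.at(warped, (yt, xt), depth_c)             # closest surface wins
    return np.where(np.isfinite(warped), warped, np.nan)
```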
Related papers
- ScaleDepth: Decomposing Metric Depth Estimation into Scale Prediction and Relative Depth Estimation [62.600382533322325]
We propose a novel monocular depth estimation method called ScaleDepth.
Our method decomposes metric depth into scene scale and relative depth, and predicts them through a semantic-aware scale prediction module.
Our method achieves metric depth estimation for both indoor and outdoor scenes in a unified framework.
arXiv Detail & Related papers (2024-07-11T05:11:56Z)
- Towards Multimodal Depth Estimation from Light Fields [29.26003765978794]
Current depth estimation methods consider only a single "true" depth per pixel, even when multiple objects at different depths contribute to the color of that pixel.
We contribute the first "multimodal light field depth dataset" that contains the depths of all objects which contribute to the color of a pixel.
arXiv Detail & Related papers (2022-03-30T18:00:00Z)
- Light Field Depth Estimation via Stitched Epipolar Plane Images [45.5616314866457]
We propose the concept of stitched-EPI (SEPI) to enhance slope computation.
SEPI achieves this by shifting and concatenating lines from different EPIs that correspond to the same 3D point.
We also present a depth propagation strategy aimed at improving depth estimation in texture-less regions.
arXiv Detail & Related papers (2022-03-29T02:43:40Z)
- Edge-aware Bidirectional Diffusion for Dense Depth Estimation from Light Fields [31.941861222005603]
We present an algorithm to estimate fast and accurate depth maps from light fields via a sparse set of depth edges and gradients.
Our proposed approach is based on the idea that true depth edges are more sensitive than texture edges to local constraints.
arXiv Detail & Related papers (2021-07-07T01:26:25Z)
- Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
By assuming that depth maps often lie on low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases (see the sketch after this list).
arXiv Detail & Related papers (2020-12-02T11:57:37Z)
- Deep Multi-Scale Feature Learning for Defocus Blur Estimation [10.455763145066168]
This paper presents an edge-based defocus blur estimation method from a single defocused image.
We first distinguish edges that lie at depth discontinuities (called depth edges, for which the blur estimate is ambiguous) from edges that lie at approximately constant depth regions (called pattern edges, for which the blur estimate is well-defined).
We estimate the defocus blur amount at pattern edges only, and explore a scheme based on guided filters that prevents data propagation across the detected depth edges to obtain a dense blur map with well-defined object boundaries.
arXiv Detail & Related papers (2020-09-24T20:36:40Z)
- Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
- Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields [25.3479048674598]
Current methods for depth map prediction from monocular images tend to predict smooth, poorly localized contours.
We learn to predict, given a depth map predicted by some reconstruction method, a 2D displacement field able to re-sample pixels around the occlusion boundaries into sharper reconstructions.
Our method can be applied to the output of any depth estimation method, in an end-to-end trainable fashion.
arXiv Detail & Related papers (2020-02-28T14:15:07Z)
- Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely on depth from focus cues instead of different views.
We present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)
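As a toy illustration of the learned-bases idea in the Efficient Depth Completion entry above, the sketch below fits principal depth bases with an SVD on synthetic data and approximates a new depth map as the mean plus a weighted sum of those bases; the data, dimensions, and names are made up, and this is not the paper's learned model.

```python
# Hedged sketch of "a dense depth map approximated by a weighted sum of
# principal depth bases": fit a low-dimensional subspace to flattened training
# depth maps via SVD, then project a new map onto it. Synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 32, 32, 8                      # tiny resolution and subspace size for illustration
train = rng.random((100, H * W))         # stand-in for flattened training depth maps

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
bases = vt[:K]                           # K principal depth bases, each of size H*W

new_depth = rng.random(H * W)            # a new (flattened) depth map to approximate
weights = bases @ (new_depth - mean)     # project onto the subspace
approx = mean + weights @ bases          # weighted sum of bases

print("relative reconstruction error:",
      np.linalg.norm(new_depth - approx) / np.linalg.norm(new_depth))
```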