Light Field Depth Estimation via Stitched Epipolar Plane Images
- URL: http://arxiv.org/abs/2203.15201v3
- Date: Thu, 7 Sep 2023 08:54:54 GMT
- Title: Light Field Depth Estimation via Stitched Epipolar Plane Images
- Authors: Ping Zhou, Langqing Shi, Xiaoyang Liu, Jing Jin, Yuting Zhang, and
Junhui Hou
- Abstract summary: We propose the concept of stitched-EPI (SEPI) to enhance slope computation.
SEPI achieves this by shifting and concatenating lines from different EPIs that correspond to the same 3D point.
We also present a depth propagation strategy aimed at improving depth estimation in texture-less regions.
- Score: 45.5616314866457
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Depth estimation is a fundamental problem in light field processing.
Epipolar-plane image (EPI)-based methods often encounter challenges such as low
accuracy in slope computation due to discretization errors and limited angular
resolution. In addition, existing methods perform well in most regions but
struggle to produce sharp edges in occluded regions and to resolve ambiguities in
texture-less regions. To address these issues, we propose the concept of
stitched-EPI (SEPI) to enhance slope computation. SEPI achieves this by
shifting and concatenating lines from different EPIs that correspond to the
same 3D point. Moreover, we introduce the half-SEPI algorithm, which focuses
exclusively on the non-occluded portion of lines to handle occlusion.
Additionally, we present a depth propagation strategy aimed at improving depth
estimation in texture-less regions. This strategy involves determining the
depth of such regions by progressing from the edges towards the interior,
prioritizing accurate regions over coarse regions. Through extensive
experimental evaluations and ablation studies, we validate the effectiveness of
our proposed method. The results demonstrate its superior ability to generate
more accurate and robust depth maps across all regions compared to
state-of-the-art methods. The source code will be publicly available at
https://github.com/PingZhou-LF/Light-Field-Depth-Estimation-Based-on-Stitched-EPIs.
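The core idea can be illustrated with a toy sketch (all names and numbers here are illustrative simplifications, not the authors' implementation): a scene point traces a line x = x0 + slope * u along the angular axis u of an epipolar-plane image, and the slope encodes the point's disparity and hence depth. A single EPI offers only a few pixel-quantized samples of that line, so the fitted slope suffers from discretization error; stitching aligned samples of the same 3D point from several EPIs lengthens the line before the slope is fitted.

```python
import numpy as np

def fit_slope(u, x):
    """Least-squares slope of pixel positions x over angular positions u."""
    A = np.column_stack([u, np.ones_like(u, dtype=float)])
    slope, _intercept = np.linalg.lstsq(A, x.astype(float), rcond=None)[0]
    return slope

TRUE_SLOPE = 0.75  # ground-truth slope of the toy scene point (illustrative)
X0 = 10.0          # its pixel position in the first view (illustrative)

def quantized_line(u):
    """EPI samples of the point, rounded to the pixel grid (discretization)."""
    return np.round(X0 + TRUE_SLOPE * u)

# Single EPI: only 5 angular samples, so rounding dominates the slope fit.
u_single = np.arange(5, dtype=float)
slope_single = fit_slope(u_single, quantized_line(u_single))

# Stitched EPI: samples of the same point from three EPIs, shifted onto one
# common angular axis before fitting (the shifts are assumed known here).
u_stitched = np.arange(15, dtype=float)
slope_stitched = fit_slope(u_stitched, quantized_line(u_stitched))

print(f"single-EPI slope:   {slope_single:.4f}")
print(f"stitched-EPI slope: {slope_stitched:.4f}  (true: {TRUE_SLOPE})")
```

Even in this toy setting, tripling the angular extent of the line pulls the fitted slope noticeably closer to the true value, which is the intuition behind SEPI's improved slope computation.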
Related papers
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy in which a neural network is trained to estimate a dense, complete depth map from polarization data and sensor depth maps from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- Self-Supervised Light Field Depth Estimation Using Epipolar Plane Images [13.137957601685041]
We propose a self-supervised learning framework for light field depth estimation.
Compared with other state-of-the-art methods, the proposed method can also obtain higher quality results in real-world scenarios.
arXiv Detail & Related papers (2022-03-29T01:18:59Z)
- Edge-aware Bidirectional Diffusion for Dense Depth Estimation from Light Fields [31.941861222005603]
We present an algorithm to estimate fast and accurate depth maps from light fields via a sparse set of depth edges and gradients.
Our proposed approach is based on the idea that true depth edges are more sensitive than texture edges to local constraints.
arXiv Detail & Related papers (2021-07-07T01:26:25Z)
- Occlusion-aware Unsupervised Learning of Depth from 4-D Light Fields [50.435129905215284]
We present an unsupervised learning-based depth estimation method for 4-D light field processing and analysis.
Exploiting the unique geometric structure of light field data, we explore the angular coherence among subsets of the light field views to estimate depth maps.
Our method can significantly shrink the performance gap between previous unsupervised methods and supervised ones, and produces depth maps with accuracy comparable to traditional methods at significantly reduced computational cost.
arXiv Detail & Related papers (2021-06-06T06:19:50Z)
- Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
By assuming that depth maps often lie on low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases.
arXiv Detail & Related papers (2020-12-02T11:57:37Z)
- View-consistent 4D Light Field Depth Estimation [37.04038603184669]
We propose a method to compute depth maps for every sub-aperture image in a light field in a view-consistent way.
Our method precisely defines depth edges via EPIs, then diffuses them spatially within the central view.
arXiv Detail & Related papers (2020-09-09T01:47:34Z)
- Non-Local Spatial Propagation Network for Depth Completion [82.60915972250706]
We propose a robust and efficient end-to-end non-local spatial propagation network for depth completion.
The proposed network takes RGB and sparse depth images as inputs and estimates the non-local neighbors of each pixel and their affinities.
We show that the proposed algorithm is superior to conventional algorithms in terms of depth completion accuracy and robustness to the mixed-depth problem.
arXiv Detail & Related papers (2020-07-20T12:26:51Z)
- EPI-based Oriented Relation Networks for Light Field Depth Estimation [13.120247042876175]
We propose an end-to-end fully convolutional network (FCN) to estimate the depth value of the intersection point on the horizontal and vertical Epipolar Plane Images (EPIs).
We present a new feature-extraction module, called Oriented Relation Module (ORM), that constructs the relationship between the line orientations.
To facilitate training, we also propose a refocusing-based data augmentation method to obtain different slopes from EPIs of the same scene point.
arXiv Detail & Related papers (2020-07-09T03:39:09Z)
- Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.