Occlusion-aware Unsupervised Learning of Depth from 4-D Light Fields
- URL: http://arxiv.org/abs/2106.03043v1
- Date: Sun, 6 Jun 2021 06:19:50 GMT
- Title: Occlusion-aware Unsupervised Learning of Depth from 4-D Light Fields
- Authors: Jing Jin and Junhui Hou
- Abstract summary: We present an unsupervised learning-based depth estimation method for 4-D light field processing and analysis.
Exploiting the unique geometric structure of light field data, we explore the angular coherence among subsets of the light field views to estimate depth maps.
Our method significantly narrows the performance gap between previous unsupervised methods and supervised ones, and produces depth maps with accuracy comparable to traditional methods at markedly reduced computational cost.
- Score: 50.435129905215284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Depth estimation is a fundamental issue in 4-D light field processing and
analysis. Although recent supervised learning-based light field depth
estimation methods have significantly improved the accuracy and efficiency of
traditional optimization-based ones, these methods rely on the training over
light field data with ground-truth depth maps which are challenging to obtain
or even unavailable for real-world light field data. Besides, due to the
inevitable gap (or domain difference) between real-world and synthetic data,
they may suffer from serious performance degradation when generalizing the
models trained with synthetic data to real-world data. By contrast, we propose
an unsupervised learning-based method, which does not require ground-truth
depth as supervision during training. Specifically, exploiting the unique
geometric structure of light field data, we present an
occlusion-aware strategy to improve the accuracy on occlusion areas, in which
we explore the angular coherence among subsets of the light field views to
estimate initial depth maps, and utilize a constrained unsupervised loss to
learn their corresponding reliability for final depth prediction. Additionally,
we adopt a multi-scale network with a weighted smoothness loss to handle the
textureless areas. Experimental results on synthetic data show that our method
can significantly shrink the performance gap between the previous unsupervised
method and supervised ones, and produce depth maps with accuracy comparable to
traditional methods at markedly reduced computational cost. Moreover,
experiments on real-world datasets show that our method can avoid the domain
shift problem present in supervised methods, demonstrating the great
potential of our method.
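The angular-coherence idea at the heart of the abstract can be illustrated with a toy example. The following is a minimal sketch, not the authors' implementation: it assumes a Lambertian, fronto-parallel scene at a single integer disparity, a 3x3 grid of views, and an `lf[v, u]` indexing convention of our own choosing. Disparity is recovered by warping all views to the center view and minimizing the variance across them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3x3 light field of a fronto-parallel plane at integer disparity 2.
# Assumed convention: lf[v, u] is the view at angular position (u, v); a
# point with disparity d shifts by d*(u - uc, v - vc) relative to the center.
V = U = 3
vc = uc = 1
d_true = 2
base = rng.random((32, 32))
lf = np.empty((V, U, 32, 32))
for v in range(V):
    for u in range(U):
        lf[v, u] = np.roll(base, shift=(-d_true * (v - vc), -d_true * (u - uc)), axis=(0, 1))

def angular_coherence_cost(lf, d):
    """Warp every view to the center under disparity hypothesis d; the mean
    per-pixel variance across warped views is minimal at the true disparity."""
    warped = np.stack([
        np.roll(lf[v, u], shift=(d * (v - vc), d * (u - uc)), axis=(0, 1))
        for v in range(lf.shape[0]) for u in range(lf.shape[1])
    ])
    return warped.var(axis=0).mean()

# The paper's occlusion-aware strategy would evaluate such a cost over
# *subsets* of views (e.g. only views on one side of an occluder) and learn
# per pixel which subset's estimate is reliable; here, for simplicity, the
# cost uses all views and a single global disparity.
costs = {d: angular_coherence_cost(lf, d) for d in range(-3, 4)}
d_hat = min(costs, key=costs.get)
```

In this occlusion-free toy setup the cost is exactly minimized at the true disparity; the learned method in the paper replaces the brute-force search with a network trained under an unsupervised loss of this flavor.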
Related papers
- Uncertainty-guided Optimal Transport in Depth Supervised Sparse-View 3D Gaussian [49.21866794516328]
3D Gaussian splatting has demonstrated impressive performance in real-time novel view synthesis.
Previous approaches have incorporated depth supervision into the training of 3D Gaussians to mitigate overfitting.
We introduce a novel method to supervise the depth distribution of 3D Gaussians, utilizing depth priors with integrated uncertainty estimates.
arXiv Detail & Related papers (2024-05-30T03:18:30Z)
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy in which a neural network is trained to estimate a dense, complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that weights trained on synthetic data are robust against accumulated-error perturbations when regularized towards a flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z)
- Self-Supervised Light Field Depth Estimation Using Epipolar Plane Images [13.137957601685041]
We propose a self-supervised learning framework for light field depth estimation.
Compared with other state-of-the-art methods, the proposed method can also obtain higher quality results in real-world scenarios.
arXiv Detail & Related papers (2022-03-29T01:18:59Z)
- OPAL: Occlusion Pattern Aware Loss for Unsupervised Light Field Disparity Estimation [22.389903710616508]
Unsupervised methods can achieve accuracy comparable to supervised methods, with much higher generalization capacity and efficiency.
We present OPAL, which successfully extracts and encodes the general occlusion patterns inherent in the light field for loss calculation.
arXiv Detail & Related papers (2022-03-04T10:32:18Z)
- On the Sins of Image Synthesis Loss for Self-supervised Depth Estimation [60.780823530087446]
We show that improvements in image synthesis do not necessitate improvement in depth estimation.
We attribute this diverging phenomenon to aleatoric uncertainties, which originate from data.
This observed divergence has not been previously reported or studied in depth.
arXiv Detail & Related papers (2021-09-13T17:57:24Z)
- Self-Guided Instance-Aware Network for Depth Completion and Enhancement [6.319531161477912]
Existing methods directly interpolate the missing depth measurements based on pixel-wise image content and the corresponding neighboring depth values.
We propose a novel self-guided instance-aware network (SG-IANet) that utilizes a self-guided mechanism to extract the instance-level features needed for depth restoration.
arXiv Detail & Related papers (2021-05-25T19:41:38Z)
- Towards Unpaired Depth Enhancement and Super-Resolution in the Wild [121.96527719530305]
State-of-the-art data-driven methods of depth map super-resolution rely on registered pairs of low- and high-resolution depth maps of the same scenes.
We consider an approach to depth map enhancement based on learning from unpaired data.
arXiv Detail & Related papers (2021-05-25T16:19:16Z)
- Domain Adaptive Monocular Depth Estimation With Semantic Information [13.387521845596149]
We propose an adversarial training model that leverages semantic information to narrow the domain gap.
The proposed compact model achieves state-of-the-art performance comparable to complex latest models.
arXiv Detail & Related papers (2021-04-12T18:50:41Z)
- Improving Monocular Depth Estimation by Leveraging Structural Awareness and Complementary Datasets [21.703238902823937]
We propose a structure-aware neural network with spatial attention blocks to exploit the spatial relationship of visual features.
Second, we introduce a global focal relative loss for uniform point pairs to enhance spatial constraint in the prediction.
Third, based on analysis of failure cases for prior methods, we collect a new Hard Case (HC) Depth dataset of challenging scenes.
arXiv Detail & Related papers (2020-07-22T08:21:02Z)
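Several of the papers above operate on epipolar plane images (EPIs): 2-D slices of the 4-D light field in which each scene point traces a line whose slope equals its disparity. As background, here is a minimal sketch of EPI extraction and slope-based disparity recovery; the `lf[v, u, y, x]` layout and the single-disparity toy scene are assumptions for illustration, not details taken from any of the papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1x5 light field (one angular row) of a plane at integer disparity 3.
U, H, W = 5, 8, 64
uc = U // 2
d_true = 3
base = rng.random((H, W))
lf = np.stack([np.roll(base, shift=-d_true * (u - uc), axis=1) for u in range(U)])[None]

def horizontal_epi(lf, v0, y0):
    """Fix an angular row v0 and a spatial row y0 of lf[v, u, y, x]; the
    resulting (U, W) slice is the EPI, where each scene point traces a line
    whose slope is its disparity."""
    return lf[v0, :, y0, :]

def slope_cost(epi, d):
    """Shear the EPI rows by a candidate disparity d; the variance along the
    angular axis vanishes when d matches the line slope (no occlusions)."""
    sheared = np.stack([np.roll(epi[u], d * (u - uc)) for u in range(epi.shape[0])])
    return sheared.var(axis=0).mean()

epi = horizontal_epi(lf, 0, 4)
costs = {d: slope_cost(epi, d) for d in range(-4, 5)}
d_hat = min(costs, key=costs.get)
```

Occlusions show up in an EPI as one line terminating another, which is why occlusion-aware losses (as in OPAL and the main paper above) restrict the coherence measure to unoccluded subsets of views.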
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.