Learning Light Field Angular Super-Resolution via a Geometry-Aware
Network
- URL: http://arxiv.org/abs/2002.11263v1
- Date: Wed, 26 Feb 2020 02:36:57 GMT
- Title: Learning Light Field Angular Super-Resolution via a Geometry-Aware
Network
- Authors: Jing Jin and Junhui Hou and Hui Yuan and Sam Kwong
- Abstract summary: We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves PSNR over the second-best method by up to 2 dB on average, while reducing execution time by 48$\times$.
- Score: 101.59693839475783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The acquisition of light field images with high angular resolution is costly.
Although many methods have been proposed to improve the angular resolution of a
sparsely-sampled light field, they typically focus on light fields with a small
baseline, as captured by consumer light field cameras. By making full
use of the intrinsic \textit{geometry} information of light fields, in this
paper we propose an end-to-end learning-based approach aiming at angularly
super-resolving a sparsely-sampled light field with a large baseline. Our model
consists of two learnable modules and a physically-based module. Specifically,
it includes a depth estimation module for explicitly modeling the scene
geometry, a physically-based warping module for novel view synthesis, and a light
field blending module specifically designed for light field reconstruction.
Moreover, we introduce a novel loss function to promote the preservation of the
light field parallax structure. Experimental results over various light field
datasets including large baseline light field images demonstrate the
significant superiority of our method when compared with state-of-the-art ones,
i.e., our method improves PSNR over the second-best method by up to 2 dB on
average, while reducing execution time by 48$\times$. In addition, our method
preserves the light field parallax structure better.
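The three-stage pipeline in the abstract (depth estimation, physically-based warping, blending) hinges on disparity-based warping: a pixel's apparent shift between sub-aperture views is its disparity scaled by the angular baseline. The sketch below is a hypothetical illustration, not the paper's implementation: the `warp_view` function, the nearest-neighbour sampling, and the `(du, dv)` angular-offset convention are all assumptions for the sake of the example; the paper's learnable module would use differentiable bilinear sampling inside the network.

```python
import numpy as np

def warp_view(src, disp, du, dv):
    """Backward-warp a source sub-aperture view to a novel angular
    position offset by (du, dv) baseline units, using a per-pixel
    disparity map estimated for the target view.
    Nearest-neighbour sampling keeps the sketch short."""
    h, w = disp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # A scene point with disparity d shifts by d pixels per unit of
    # angular baseline, so sample the source at (x + du*d, y + dv*d).
    xs_src = np.clip(np.round(xs + du * disp).astype(int), 0, w - 1)
    ys_src = np.clip(np.round(ys + dv * disp).astype(int), 0, h - 1)
    return src[ys_src, xs_src]
```

With a constant disparity of 1, warping by two baseline units horizontally reproduces the source shifted by two pixels (clipped at the border), which is the behaviour the blending module would then refine across all synthesized views.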
Related papers
- Light Field Spatial Resolution Enhancement Framework [0.24578723416255746]
We propose a novel light field framework for resolution enhancement.
The first module generates a high-resolution, all-in-focus image.
The second module, a texture transformer network, enhances the resolution of each light field perspective independently.
arXiv Detail & Related papers (2024-05-05T02:07:10Z)
- Unsupervised Learning of High-resolution Light Field Imaging via Beam Splitter-based Hybrid Lenses [42.5604477188514]
We design a beam splitter-based hybrid light field imaging prototype to record 4D light field image and high-resolution 2D image simultaneously.
The 2D image could be considered as the high-resolution ground truth corresponding to the low-resolution central sub-aperture image of 4D light field image.
We propose an unsupervised learning-based super-resolution framework with the hybrid light field dataset.
arXiv Detail & Related papers (2024-02-29T10:30:02Z)
- Learning based Deep Disentangling Light Field Reconstruction and Disparity Estimation Application [1.5603081929496316]
We propose a Deep Disentangling Mechanism, which inherits the principle of the light field disentangling mechanism and adds advanced network structure.
We design a light-field reconstruction network (i.e., DDASR) on the basis of the Deep Disentangling Mechanism, and achieve SOTA performance in the experiments.
arXiv Detail & Related papers (2023-11-14T12:48:17Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- OAFuser: Towards Omni-Aperture Fusion for Light Field Semantic Segmentation [48.828453331724965]
We propose an Omni-Aperture Fusion model (OAFuser) to extract angular information from sub-aperture images to generate semantically consistent results.
The proposed OAFuser achieves state-of-the-art performance on four UrbanLF datasets in terms of all evaluation metrics.
arXiv Detail & Related papers (2023-07-28T14:43:27Z)
- Learning Texture Transformer Network for Light Field Super-Resolution [1.5469452301122173]
We propose a method to improve the spatial resolution of light field images with the aid of a texture transformer network (TTSR).
The results demonstrate around 4 dB to 6 dB PSNR gain over a bicubically resized light field image.
arXiv Detail & Related papers (2022-10-09T15:16:07Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
- A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
arXiv Detail & Related papers (2021-03-11T07:22:13Z)
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensional spatiality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
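Several entries above report gains in PSNR (e.g. up to 2 dB over the second-best method, or 4 dB to 6 dB over bicubic upsampling). For reference, PSNR is derived from the mean squared error against the ground-truth view; the helper below is a minimal sketch, with the function name and the `peak` normalization convention being illustrative assumptions rather than code from any of the papers.

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB between a ground-truth view
    and a reconstructed one, both assumed scaled to [0, peak]."""
    ref = np.asarray(ref, dtype=np.float64)
    est = np.asarray(est, dtype=np.float64)
    mse = np.mean((ref - est) ** 2)
    # A perfect reconstruction has zero error, hence infinite PSNR.
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

Because PSNR is logarithmic, a 2 dB improvement corresponds to roughly a 37% reduction in mean squared error.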
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.