Light Field Reconstruction Using Convolutional Network on EPI and
Extended Applications
- URL: http://arxiv.org/abs/2103.13043v1
- Date: Wed, 24 Mar 2021 08:16:32 GMT
- Authors: Gaochang Wu, Yebin Liu, Lu Fang, Qionghai Dai, Tianyou Chai
- Abstract summary: A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
- Score: 78.63280020581662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a novel convolutional neural network (CNN)-based framework is
developed for light field reconstruction from a sparse set of views. We
indicate that the reconstruction can be efficiently modeled as angular
restoration on an epipolar plane image (EPI). The main problem in direct
reconstruction on the EPI involves an information asymmetry between the spatial
and angular dimensions, where the detailed portion in the angular dimensions is
damaged by undersampling. Directly upsampling or super-resolving the light
field in the angular dimensions causes ghosting effects. To suppress these
ghosting effects, we contribute a novel "blur-restoration-deblur" framework.
First, the "blur" step is applied to extract the low-frequency components of
the light field in the spatial dimensions by convolving each EPI slice with a
selected blur kernel. Then, the "restoration" step is implemented by a CNN,
which is trained to restore the angular details of the EPI. Finally, we use a
non-blind "deblur" operation to recover the spatial high frequencies suppressed
by the EPI blur. We evaluate our approach on several datasets, including
synthetic scenes, real-world scenes and challenging microscope light field
data. We demonstrate the high performance and robustness of the proposed
framework compared with state-of-the-art algorithms. We further show extended
applications, including depth enhancement and interpolation for unstructured
input. More importantly, a novel rendering approach is presented by combining
the proposed framework and depth information to handle large disparities.
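The "blur-restoration-deblur" pipeline described in the abstract can be sketched on a single EPI slice. This is a minimal illustration, not the authors' implementation: the blur kernel is assumed Gaussian, the trained restoration CNN is stood in for by simple linear angular upsampling, and the non-blind deblur is approximated with 1-D Wiener deconvolution along the spatial axis.

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    # Normalized 1-D Gaussian blur kernel (assumed kernel shape).
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur_epi(epi, kernel):
    # "Blur" step: convolve each angular row along the spatial axis
    # to suppress spatial high frequencies before restoration.
    return np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, epi)

def restore_angular(epi_sparse, factor):
    # Stand-in for the trained CNN: linear interpolation between
    # adjacent angular rows (views) to densify the angular dimension.
    s = epi_sparse.shape[0]
    coords = np.linspace(0, s - 1, (s - 1) * factor + 1)
    idx = np.floor(coords).astype(int)
    frac = coords - idx
    idx2 = np.minimum(idx + 1, s - 1)
    return (1 - frac)[:, None] * epi_sparse[idx] + frac[:, None] * epi_sparse[idx2]

def wiener_deblur(row, kernel, eps=1e-2):
    # Non-blind "deblur" step: 1-D Wiener deconvolution in the
    # frequency domain to recover spatial high frequencies.
    n = row.size
    K = np.fft.fft(kernel, n)
    R = np.fft.fft(row)
    return np.real(np.fft.ifft(R * np.conj(K) / (np.abs(K) ** 2 + eps)))

# Toy EPI slice: 3 sparse views x 64 spatial samples.
rng = np.random.default_rng(0)
epi = rng.random((3, 64))
k = gaussian_kernel_1d(sigma=1.5, radius=4)

blurred = blur_epi(epi, k)                                       # blur
restored = restore_angular(blurred, factor=2)                    # restoration
deblurred = np.apply_along_axis(wiener_deblur, 1, restored, k)   # deblur
print(deblurred.shape)  # angularly densified EPI: (5, 64)
```

The toy 3-view slice is upsampled to 5 views; in the paper's setting the restoration step would be the trained CNN and the deblur step a non-blind deconvolution matched to the chosen kernel.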
Related papers
- RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering [26.988572852463815]
In this paper, we propose a novel end-to-end relightable neural inverse rendering system.
Our experiments demonstrate that our algorithm achieves state-of-the-art performance in inverse rendering and relighting.
arXiv Detail & Related papers (2024-09-30T09:42:10Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- A Deep Learning Approach for SAR Tomographic Imaging of Forested Areas [10.477070348391079]
We show that light-weight neural networks can be trained to perform the tomographic inversion with a single feed-forward pass.
We train our encoder-decoder network using simulated data and validate our technique on real L-band and P-band data.
arXiv Detail & Related papers (2023-01-20T14:34:03Z)
- Geo-NI: Geometry-aware Neural Interpolation for Light Field Rendering [57.775678643512435]
We present a Geometry-aware Neural Interpolation (Geo-NI) framework for light field rendering.
By combining the superiorities of NI and DIBR, the proposed Geo-NI is able to render views with large disparity.
arXiv Detail & Related papers (2022-06-20T12:25:34Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes the reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- Deep Sparse Light Field Refocusing [35.796798137910066]
Current methods require a dense field of angular views for this purpose.
We present a novel implementation of digital refocusing based on sparse angular information using neural networks.
arXiv Detail & Related papers (2020-09-05T18:34:55Z)
- Spatial-Angular Attention Network for Light Field Reconstruction [64.27343801968226]
We propose a spatial-angular attention network to perceive correspondences in the light field non-locally.
Motivated by the non-local attention mechanism, a spatial-angular attention module is introduced to compute the responses from all the positions in the epipolar plane for each pixel in the light field.
We then propose a multi-scale reconstruction structure to efficiently implement the non-local attention at a low spatial scale.
arXiv Detail & Related papers (2020-07-05T06:55:29Z)
- High-Order Residual Network for Light Field Super-Resolution [39.93400777363467]
Plenoptic cameras usually sacrifice the spatial resolution of their sub-aperture images (SAIs) to acquire information from different viewpoints.
We propose a novel high-order residual network to learn the geometric features hierarchically from the light field for reconstruction.
Our approach enables high-quality reconstruction even in challenging regions and outperforms state-of-the-art single image or LF reconstruction methods with both quantitative measurements and visual evaluation.
arXiv Detail & Related papers (2020-03-29T18:06:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.