Deep Sparse Light Field Refocusing
- URL: http://arxiv.org/abs/2009.02582v1
- Date: Sat, 5 Sep 2020 18:34:55 GMT
- Title: Deep Sparse Light Field Refocusing
- Authors: Shachar Ben Dayan, David Mendlovic and Raja Giryes
- Abstract summary: Current methods require a dense field of angular views for this purpose.
We present a novel implementation of digital refocusing based on sparse angular information using neural networks.
- Score: 35.796798137910066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light field photography enables the recording of 4D images, containing angular
information alongside the spatial information of the scene. One of the important
applications of light field imaging is post-capture refocusing. Current methods
require a dense field of angular views for this purpose; these can be acquired
with a micro-lens system or with a compressive system. Both techniques have
major drawbacks to consider, including bulky structures and an angular-spatial
resolution trade-off. We present a novel implementation of digital refocusing
based on sparse angular information using neural networks. This allows
recording at high spatial resolution at the expense of angular resolution, thus
enabling the design of compact and simple devices with improved hardware, as well as
better performance of compressive systems. We use a novel convolutional neural
network whose relatively small structure enables fast reconstruction with low
memory consumption. Moreover, it handles various refocusing ranges and noise
levels without re-training. Results show major improvement compared to
existing methods.
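The classical baseline that such deep methods aim to match from far fewer views is shift-and-sum (integral) refocusing: each angular view is shifted in proportion to its offset from the central view and the views are averaged. The sketch below is not the paper's network but a minimal illustration of that baseline; the function name, the integer-shift approximation, and the `alpha` focus parameter are assumptions for illustration (real implementations use sub-pixel interpolation).

```python
import numpy as np

def refocus_shift_and_sum(light_field, alpha):
    """Synthesize a refocused image from a 4D light field L[u, v, y, x].

    Each angular view (u, v) is shifted proportionally to its offset from
    the central view, then all views are averaged; `alpha` selects the
    synthetic focal plane (alpha = 0 keeps the original focus).
    """
    U, V, H, W = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Integer shifts for brevity; practical code interpolates.
            dy = int(round(alpha * (u - uc)))
            dx = int(round(alpha * (v - vc)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```

With a sparse angular grid (e.g. 3x3 instead of 14x14), this average produces ghosting artifacts at out-of-focus depths, which is exactly the degradation a learned reconstruction is meant to suppress.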
Related papers
- Learning based Deep Disentangling Light Field Reconstruction and
Disparity Estimation Application [1.5603081929496316]
We propose a Deep Disentangling Mechanism, which inherits the principle of the light field disentangling mechanism and adds advanced network structure.
We design a light-field reconstruction network (i.e., DDASR) on the basis of the Deep Disentangling Mechanism, and achieve SOTA performance in the experiments.
arXiv Detail & Related papers (2023-11-14T12:48:17Z) - Depth Monocular Estimation with Attention-based Encoder-Decoder Network
from Single Image [7.753378095194288]
Vision-based approaches have recently received much attention and can overcome these drawbacks.
In this work, we explore an extreme scenario in vision-based settings: estimate a depth map from one monocular image severely plagued by grid artifacts and blurry edges.
Our novel approach can find the focus of current image with minimal overhead and avoid losses of depth features.
arXiv Detail & Related papers (2022-10-24T23:01:25Z) - InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z) - Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z) - Light Field Implicit Representation for Flexible Resolution
Reconstruction [9.173467982128514]
We propose an implicit representation model for 4D light fields conditioned on a sparse set of input views.
Our model is trained to output the light field values for a continuous range of coordinates.
Experiments show that our method achieves state-of-the-art performance for view synthesis while being computationally fast.
arXiv Detail & Related papers (2021-11-30T23:59:02Z) - Light Field Reconstruction Using Convolutional Network on EPI and
Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z) - High Quality Remote Sensing Image Super-Resolution Using Deep Memory
Connected Network [21.977093907114217]
Single image super-resolution is crucial for many applications such as target detection and image classification.
We propose a novel method named deep memory connected network (DMCN) based on a convolutional neural network to reconstruct high-quality super-resolution images.
arXiv Detail & Related papers (2020-10-01T15:06:02Z) - Spatial-Angular Attention Network for Light Field Reconstruction [64.27343801968226]
We propose a spatial-angular attention network to perceive correspondences in the light field non-locally.
Motivated by the non-local attention mechanism, a spatial-angular attention module is introduced to compute the responses from all the positions in the epipolar plane for each pixel in the light field.
We then propose a multi-scale reconstruction structure to efficiently implement the non-local attention in the low spatial scale.
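The core idea of such a non-local module can be sketched in a few lines: every position along an epipolar line attends to every other position via softmax-normalized similarity. This is a hypothetical simplification, not the paper's module; the learned query/key/value projections and the residual connection are omitted here for brevity.

```python
import numpy as np

def nonlocal_attention_1d(features):
    """Non-local (self-attention) response over one epipolar line.

    features: (N, C) array, N positions along the EPI, C channels.
    Each position's output is a similarity-weighted average of all
    positions' features (identity q/k/v projections assumed).
    """
    q, k, v = features, features, features        # learned projections omitted
    scores = q @ k.T / np.sqrt(features.shape[1])  # scaled dot-product
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over positions
    return weights @ v                             # residual add omitted
```

Computing this response at a coarse spatial scale, as the multi-scale structure above suggests, keeps the N x N attention matrix tractable.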
arXiv Detail & Related papers (2020-07-05T06:55:29Z) - Learning Light Field Angular Super-Resolution via a Geometry-Aware
Network [101.59693839475783]
We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves PSNR over the second-best method by up to 2 dB on average, while reducing execution time by 48x.
arXiv Detail & Related papers (2020-02-26T02:36:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.