Harnessing Multi-View Perspective of Light Fields for Low-Light Imaging
- URL: http://arxiv.org/abs/2003.02438v2
- Date: Tue, 8 Dec 2020 11:16:25 GMT
- Title: Harnessing Multi-View Perspective of Light Fields for Low-Light Imaging
- Authors: Mohit Lamba, Kranthi Kumar, Kaushik Mitra
- Abstract summary: We propose a deep neural network for Low-Light Light Field (L3F) restoration.
The proposed L3Fnet not only performs the necessary visual enhancement of each LF view but also preserves the epipolar geometry across views.
We show that L3Fnet can also be used for low-light enhancement of single-frame images.
- Score: 26.264066009506678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light Field (LF) offers unique advantages such as post-capture refocusing and
depth estimation, but low-light conditions limit these capabilities. To restore
low-light LFs we should harness the geometric cues present in different LF
views, which is not possible using single-frame low-light enhancement
techniques. We, therefore, propose a deep neural network for Low-Light Light
Field (L3F) restoration, which we refer to as L3Fnet. The proposed L3Fnet not
only performs the necessary visual enhancement of each LF view but also
preserves the epipolar geometry across views. We achieve this by adopting a
two-stage architecture for L3Fnet. Stage-I looks at all the LF views to encode
the LF geometry. This encoded information is then used in Stage-II to
reconstruct each LF view. To facilitate learning-based techniques for low-light
LF imaging, we collected a comprehensive LF dataset of various scenes. For each
scene, we captured four LFs: one with near-optimal exposure and ISO settings,
and the others at light levels ranging from low to extremely low. The
effectiveness of the proposed L3Fnet is
supported by both visual and numerical comparisons on this dataset. To further
analyze the performance of low-light reconstruction methods, we also propose an
L3F-wild dataset that contains LFs captured late at night with almost zero lux
values. No ground truth is available in this dataset. To perform well on the
L3F-wild dataset, any method must adapt to the light level of the captured
scene. To do this we propose a novel pre-processing block that makes L3Fnet
robust to various degrees of low-light conditions. Lastly, we show that L3Fnet
can also be used for low-light enhancement of single-frame images, despite
being engineered for LF data. We do so by converting the single-frame DSLR
image into a form suitable for L3Fnet, which we call a pseudo-LF.
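To make the two-stage design concrete, here is a minimal PyTorch sketch: a Stage-I encoder that looks at all views jointly to produce a shared geometry code, and a Stage-II decoder that restores each view conditioned on that code. The module names, layer widths, the amplify pre-processing, and the pseudo_lf helper are illustrative assumptions, not the authors' exact L3Fnet.

    import torch
    import torch.nn as nn

    class GeometryEncoder(nn.Module):
        """Stage-I: jointly encode all LF views to capture the shared geometry."""
        def __init__(self, num_views, feat=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3 * num_views, feat, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            )
        def forward(self, views):                 # views: (B, N, 3, H, W)
            b, n, c, h, w = views.shape
            return self.net(views.reshape(b, n * c, h, w))

    class ViewDecoder(nn.Module):
        """Stage-II: restore one view, conditioned on the Stage-I geometry code."""
        def __init__(self, feat=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3 + feat, feat, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, 3, 3, padding=1),
            )
        def forward(self, view, code):            # view: (B, 3, H, W)
            return self.net(torch.cat([view, code], dim=1))

    def amplify(views, target_mean=0.4, eps=1e-6):
        """Hypothetical pre-processing block: scale the input so its mean
        brightness is roughly constant, whatever the capture light level."""
        return (views * target_mean / (views.mean() + eps)).clamp(0.0, 1.0)

    def pseudo_lf(image, num_views):
        """Hypothetical pseudo-LF: replicate one frame into N identical views
        so a single DSLR image can be fed through an LF restoration network."""
        return image.unsqueeze(1).repeat(1, num_views, 1, 1, 1)

    # Toy usage on a dark 7x7 light field (49 views).
    views = amplify(torch.rand(1, 49, 3, 64, 64) * 0.05)
    code = GeometryEncoder(num_views=49)(views)   # Stage-I
    decoder = ViewDecoder()                       # shared across views
    restored = torch.stack([decoder(views[:, i], code) for i in range(49)], dim=1)

Reusing one decoder across all views makes the shared geometry code the only cross-view coupling, which is one simple way to keep the restored views epipolar-consistent.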
Related papers
- Arbitrary Volumetric Refocusing of Dense and Sparse Light Fields [3.114475381459836]
We propose an end-to-end pipeline to simultaneously refocus multiple arbitrary regions of a dense or a sparse light field.
We apply pixel-dependent shifts within the typical shift-and-sum method to refocus an LF.
We then employ a deep learning model based on the U-Net architecture to almost completely eliminate the ghosting artifacts.
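The shift-and-sum refocusing mentioned above is the classic baseline: shift each sub-aperture view in proportion to its angular offset from the center view, then average, so points at the chosen depth align while everything else blurs. Below is a minimal NumPy sketch with one global slope and integer shifts; the paper's pixel-dependent shifts generalize this per region.

    import numpy as np

    def refocus(lf, slope):
        """lf: (U, V, H, W, 3) sub-aperture views; slope: disparity per unit
        of angular offset. Larger |slope| focuses on nearer depths."""
        u_dim, v_dim = lf.shape[:2]
        uc, vc = (u_dim - 1) / 2, (v_dim - 1) / 2
        out = np.zeros(lf.shape[2:], dtype=np.float64)
        for u in range(u_dim):
            for v in range(v_dim):
                dy = int(round(slope * (u - uc)))
                dx = int(round(slope * (v - vc)))
                out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
        return out / (u_dim * v_dim)

    lf = np.random.rand(7, 7, 64, 64, 3)          # toy 7x7 light field
    refocused = refocus(lf, slope=1.5)

With sparse LFs the few, widely spaced views make this average ghost, which is the artifact the U-Net in this paper is trained to remove.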
arXiv Detail & Related papers (2025-02-26T15:47:23Z)
- LGFN: Lightweight Light Field Image Super-Resolution using Local Convolution Modulation and Global Attention Feature Extraction [5.461017270708014]
We propose a lightweight model named LGFN, which integrates the local and global features of different views and the features of different channels for LF image SR.
Our model has 0.45M parameters and 19.33G FLOPs, achieving competitive results.
arXiv Detail & Related papers (2024-09-26T11:53:25Z)
- LFIC-DRASC: Deep Light Field Image Compression Using Disentangled Representation and Asymmetrical Strip Convolution [51.909036244222904]
We propose an end-to-end deep LF Image Compression method using Disentangled Representation and Asymmetrical Strip Convolution.
Experimental results demonstrate that the proposed LFIC-DRASC achieves an average bit-rate reduction of 20.5%.
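The asymmetrical strip convolution in the title suggests pairing long, thin kernels; a plausible PyTorch reading is sketched below, with the kernel length k = 9 being an assumption.

    import torch
    import torch.nn as nn

    class StripConv(nn.Module):
        """Pair a 1xk and a kx1 convolution: long thin receptive fields suited
        to EPI-like line structures, far cheaper than a full kxk kernel."""
        def __init__(self, ch, k=9):
            super().__init__()
            self.h = nn.Conv2d(ch, ch, kernel_size=(1, k), padding=(0, k // 2))
            self.v = nn.Conv2d(ch, ch, kernel_size=(k, 1), padding=(k // 2, 0))
        def forward(self, x):
            return self.h(x) + self.v(x)

    y = StripConv(32)(torch.rand(1, 32, 64, 64))  # output stays (1, 32, 64, 64)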
arXiv Detail & Related papers (2024-09-18T05:33:42Z)
- Diffusion-based Light Field Synthesis [50.24624071354433]
LFdiff is a diffusion-based generative framework tailored for LF synthesis.
We propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations.
Extensive experiments demonstrate that LFdiff excels in synthesizing visually pleasing and disparity-controllable light fields.
arXiv Detail & Related papers (2024-02-01T13:13:16Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
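Illumination-map enhancement of this kind usually iterates a Retinex-style step: estimate the illumination from the current result, divide it out, refine, and repeat. The sketch below shows only that generic step (max over channels is a common initial estimate), not DCUNet's learned modules.

    import numpy as np

    def illumination_map(img, eps=1e-3):
        """img: (H, W, 3) in [0, 1]. The per-pixel max over channels is a
        common initial illumination estimate."""
        return np.maximum(img.max(axis=2, keepdims=True), eps)

    def enhance_step(img):
        L = illumination_map(img)
        return np.clip(img / L, 0.0, 1.0)         # reflectance-like result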
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- I See-Through You: A Framework for Removing Foreground Occlusion in Both Sparse and Dense Light Field Images [25.21481624956202]
A light field (LF) camera captures rich information from a scene. Using this information, the LF de-occlusion (LF-DeOcc) task aims to reconstruct the occlusion-free center-view image.
We propose a framework, ISTY, which is divided into three roles: (1) extract LF features, (2) define the occlusion, and (3) inpaint occluded regions.
In experiments, qualitative and quantitative results show that the proposed framework outperforms state-of-the-art LF-DeOcc methods in both sparse and dense LF datasets.
arXiv Detail & Related papers (2023-01-16T12:25:42Z)
- Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
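A light field network in this sense maps a ray directly to a color, so arbitrarily large batches of rays render in a single forward pass. Below is a generic sketch using the classic two-plane (u, v, s, t) ray parameterization; ProLiF's progressive connections and patch-level losses are not modeled.

    import torch
    import torch.nn as nn

    class LightFieldMLP(nn.Module):
        """Map a two-plane ray (u, v, s, t) straight to RGB: no per-ray
        volume marching, so a whole batch renders in one pass."""
        def __init__(self, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(4, hidden), nn.ReLU(inplace=True),
                nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
                nn.Linear(hidden, 3), nn.Sigmoid(),
            )
        def forward(self, rays):                  # rays: (B, 4)
            return self.net(rays)                 # (B, 3) RGB

    rgb = LightFieldMLP()(torch.rand(8192, 4))    # 8192 rays in one step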
arXiv Detail & Related papers (2022-07-10T13:47:20Z)
- Real-World Light Field Image Super-Resolution via Degradation Modulation [59.68036846233918]
We propose a simple yet effective method for real-world LF image SR.
A practical LF degradation model is developed to formulate the degradation process of real LF images.
A convolutional neural network is designed to incorporate the degradation prior into the SR process.
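Below is a minimal sketch of the textbook degradation pipeline such a model formulates, namely blur, downsample, and add noise to each sub-aperture view; the Gaussian kernel width, scale factor, and noise level are illustrative assumptions.

    import numpy as np

    def gaussian_kernel(size=5, sigma=1.2):
        ax = np.arange(size) - size // 2
        k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
        return k / k.sum()

    def degrade(view, scale=2, noise_std=0.01):
        """view: (H, W) sub-aperture view in [0, 1] -> degraded low-res view."""
        k = gaussian_kernel()
        pad = k.shape[0] // 2
        padded = np.pad(view, pad, mode="reflect")
        blurred = np.zeros_like(view)
        for i in range(k.shape[0]):               # direct 2D convolution
            for j in range(k.shape[1]):
                blurred += k[i, j] * padded[i:i + view.shape[0],
                                            j:j + view.shape[1]]
        low_res = blurred[::scale, ::scale]       # decimation
        noisy = low_res + np.random.normal(0.0, noise_std, low_res.shape)
        return np.clip(noisy, 0.0, 1.0)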
arXiv Detail & Related papers (2022-06-13T14:44:46Z)
- Disentangling Light Fields for Super-Resolution and Disparity Estimation [67.50796924758221]
Light field (LF) cameras record both intensity and directions of light rays, and encode 3D scenes into 4D LF images.
It is challenging for convolutional neural networks (CNNs) to process LF images since the spatial and angular information is highly intertwined with varying disparities.
We propose a generic mechanism to disentangle these coupled information for LF image processing.
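The disentangling idea rests on the macro-pixel layout: interleave the views so each spatial position holds its UxV angular samples; then a dilated convolution (dilation = angular size) touches only spatial neighbors, while a strided convolution (kernel = stride = angular size) reads one macro pixel, i.e. only angular samples. A sketch of the rearrangement and the two extractors, with channel widths as assumptions:

    import torch
    import torch.nn as nn

    def to_macro_pixel(lf):                       # lf: (B, C, U, V, H, W)
        """Interleave views into a (U*H, V*W) macro-pixel image where every
        spatial position carries its UxV angular samples as a block."""
        b, c, u, v, h, w = lf.shape
        return lf.permute(0, 1, 4, 2, 5, 3).reshape(b, c, u * h, v * w)

    class DisentangleBlock(nn.Module):
        def __init__(self, ch, ang):
            super().__init__()
            # Dilated conv skips over each macro pixel: spatial features only.
            self.spatial = nn.Conv2d(ch, ch, 3, padding=ang, dilation=ang)
            # Kernel = stride = angular size reads one macro pixel: angular only.
            self.angular = nn.Conv2d(ch, ch, kernel_size=ang, stride=ang)
        def forward(self, mp):                    # mp: (B, C, U*H, V*W)
            return self.spatial(mp), self.angular(mp)

    lf = torch.rand(1, 3, 5, 5, 32, 32)           # 5x5 views of 32x32 pixels
    s_feat, a_feat = DisentangleBlock(3, 5)(to_macro_pixel(lf))
    # s_feat: (1, 3, 160, 160); a_feat: (1, 3, 32, 32)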
arXiv Detail & Related papers (2022-02-22T01:04:41Z)
- Light field Rectification based on relative pose estimation [5.888941251567256]
Hand-held light field (LF) cameras have unique advantages in computer vision such as 3D scene reconstruction and depth estimation.
We propose to rectify LFs to obtain a large baseline. Specifically, the proposed method aligns two LFs captured by two hand-held LF cameras with a random relative pose.
For an accurate rectification, a method for pose estimation is also proposed, where the relative rotation and translation between the two LF cameras are estimated.
arXiv Detail & Related papers (2022-01-29T08:57:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.