I See-Through You: A Framework for Removing Foreground Occlusion in Both
Sparse and Dense Light Field Images
- URL: http://arxiv.org/abs/2301.06392v1
- Date: Mon, 16 Jan 2023 12:25:42 GMT
- Authors: Jiwan Hur, Jae Young Lee, Jaehyun Choi, and Junmo Kim
- Abstract summary: A light field (LF) camera captures rich information from a scene. Using this information, the LF de-occlusion (LF-DeOcc) task aims to reconstruct the occlusion-free center-view image.
We propose a framework, ISTY, which is divided into three roles: (1) extract LF features, (2) define the occlusion, and (3) inpaint occluded regions.
In experiments, qualitative and quantitative results show that the proposed framework outperforms state-of-the-art LF-DeOcc methods on both sparse and dense LF datasets.
- Score: 25.21481624956202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A light field (LF) camera captures rich information from a scene. Using this
information, the LF de-occlusion (LF-DeOcc) task aims to reconstruct the
occlusion-free center-view image. Existing LF-DeOcc studies mainly focus on
sparsely sampled (sparse) LF images, where most occluded regions are visible
in other views due to the large disparity. In this paper, we extend LF-DeOcc
to a more challenging setting: densely sampled (dense) LF images, which are
captured by a micro-lens-based portable LF camera. Due to the small disparity
range of dense LF images, most background regions are invisible in every view.
To apply LF-DeOcc to both kinds of LF data, we propose a framework, ISTY,
which is divided into three roles: (1) extract LF features, (2) define the
occlusion, and (3) inpaint occluded regions. Dividing the framework into three
specialized components according to these roles makes development and analysis
easier. Furthermore, the proposed framework yields an explainable intermediate
representation, an occlusion mask, which supports comprehensive analysis of
the model and enables other applications through mask manipulation. In
experiments, qualitative and quantitative results show that the proposed
framework outperforms state-of-the-art LF-DeOcc methods on both sparse and
dense LF datasets.
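A minimal sketch of how the three roles described above could compose into a pipeline, assuming hypothetical module names and shapes (the paper's concrete architectures are not reproduced here):

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Role (1): extract features from the stack of LF sub-aperture views."""
    def __init__(self, n_views: int, feat_ch: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(3 * n_views, feat_ch, kernel_size=3, padding=1)

    def forward(self, lf_views: torch.Tensor) -> torch.Tensor:
        # lf_views: (B, n_views, 3, H, W) -> fold views into channels
        b, n, c, h, w = lf_views.shape
        return self.conv(lf_views.reshape(b, n * c, h, w))

class OcclusionDefiner(nn.Module):
    """Role (2): predict an explainable occlusion mask from the LF features."""
    def __init__(self, feat_ch: int = 32):
        super().__init__()
        self.head = nn.Conv2d(feat_ch, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(feats))  # (B, 1, H, W), values in [0, 1]

class Inpainter(nn.Module):
    """Role (3): inpaint the regions the mask marks as occluded."""
    def __init__(self, feat_ch: int = 32):
        super().__init__()
        self.body = nn.Conv2d(feat_ch + 1, 3, kernel_size=3, padding=1)

    def forward(self, feats: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        return self.body(torch.cat([feats, mask], dim=1))

def deocc_center_view(lf_views, extractor, definer, inpainter):
    feats = extractor(lf_views)
    mask = definer(feats)            # inspectable intermediate representation
    center = inpainter(feats, mask)  # occlusion-free center-view estimate
    return center, mask
```

Because the mask is an explicit, inspectable intermediate, it can be edited (for example, dilated or selectively cleared) before the inpainting stage, which is what makes the mask-manipulation applications mentioned in the abstract possible.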
Related papers
- Arbitrary Volumetric Refocusing of Dense and Sparse Light Fields [3.114475381459836]
We propose an end-to-end pipeline to simultaneously refocus multiple arbitrary regions of a dense or a sparse light field.
We employ pixel-dependent shifts within the typical shift-and-sum method to refocus an LF (the uniform-shift baseline is sketched below).
We employ a deep learning model based on the U-Net architecture to almost completely eliminate ghosting artifacts.
arXiv Detail & Related papers (2025-02-26T15:47:23Z)
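For reference, a minimal NumPy sketch of the classic shift-and-sum refocusing baseline; the paper's pixel-dependent variant replaces the single disparity value with a per-pixel map, which is not shown here:

```python
import numpy as np

def shift_and_sum_refocus(lf: np.ndarray, disparity: float) -> np.ndarray:
    """Refocus a light field by shifting each view toward the center view.

    lf: (U, V, H, W, 3) stack of sub-aperture views on a U x V angular grid.
    disparity: pixels of shift per unit angular offset; scene content whose
    disparity matches this value comes into focus.
    """
    U, V, H, W, _ = lf.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0           # center (reference) view
    acc = np.zeros((H, W, 3), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(disparity * (u - u0)))   # integer shifts for brevity;
            dx = int(round(disparity * (v - v0)))   # sub-pixel needs interpolation
            acc += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return acc / (U * V)
```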
- LFIC-DRASC: Deep Light Field Image Compression Using Disentangled Representation and Asymmetrical Strip Convolution [51.909036244222904]
We propose an end-to-end deep LF image compression method using a disentangled representation and asymmetrical strip convolutions (one common reading of the latter is sketched below).
Experimental results demonstrate that the proposed LFIC-DRASC achieves an average bit-rate reduction of 20.5%.
arXiv Detail & Related papers (2024-09-18T05:33:42Z)
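A hypothetical sketch of what an asymmetrical strip convolution block might look like, pairing 1 x k and k x 1 kernels; this is an illustrative assumption, not the paper's exact design:

```python
import torch
import torch.nn as nn

class AsymmetricStripConv(nn.Module):
    """Pairs a horizontal (1 x k) and a vertical (k x 1) convolution so a
    large receptive field is covered with far fewer weights than a k x k kernel."""
    def __init__(self, channels: int, k: int = 7):
        super().__init__()
        self.horizontal = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2))
        self.vertical = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.horizontal(x) + self.vertical(x)
```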
- Diffusion-based Light Field Synthesis [50.24624071354433]
LFdiff is a diffusion-based generative framework tailored for LF synthesis.
We propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations.
Extensive experiments demonstrate that LFdiff excels in synthesizing visually pleasing and disparity-controllable light fields.
arXiv Detail & Related papers (2024-02-01T13:13:16Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result (a minimal sketch of this alternation follows below).
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
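A minimal sketch of the alternating unfolding scheme described above, with hypothetical enhance and estimate_illumination callables standing in for the paper's learned modules:

```python
def unfold_restore(low_light, enhance, estimate_illumination, n_stages=3):
    """Alternate between illumination estimation and enhancement.

    low_light: the degraded input image (e.g., an HxWx3 array).
    enhance(img, illum): hypothetical module refining the input given an
        illumination map.
    estimate_illumination(img): hypothetical module predicting an
        illumination map from the current estimate.
    """
    result = low_light
    for _ in range(n_stages):
        illum = estimate_illumination(result)  # map from current estimate
        result = enhance(low_light, illum)     # new enhanced result
    return result
```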
- Disentangling Light Fields for Super-Resolution and Disparity Estimation [67.50796924758221]
Light field (LF) cameras record both the intensity and the directions of light rays, and encode 3D scenes into 4D LF images.
It is challenging for convolutional neural networks (CNNs) to process LF images, since the spatial and angular information is highly intertwined with varying disparities.
We propose a generic mechanism to disentangle this coupled information for LF image processing (the underlying layout conversion is sketched after this entry).
arXiv Detail & Related papers (2022-02-22T01:04:41Z)
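A small NumPy sketch of the layout conversion commonly used in disentangling-based LF processing: the same 4D data viewed as a stack of sub-aperture images (SAIs) or as a macro-pixel image (MacPI) that groups all angular samples per spatial location. The function names are illustrative; the paper's disentangling mechanism itself applies specialized convolutions on top of such layouts:

```python
import numpy as np

def sai_to_macpi(lf: np.ndarray) -> np.ndarray:
    """(U, V, H, W) stack of sub-aperture images -> (U*H, V*W) macro-pixel
    image, where MacPI[h*U + u, w*V + v] == lf[u, v, h, w]."""
    U, V, H, W = lf.shape
    return lf.transpose(2, 0, 3, 1).reshape(H * U, W * V)

def macpi_to_sai(macpi: np.ndarray, U: int, V: int) -> np.ndarray:
    """Inverse of sai_to_macpi."""
    H, W = macpi.shape[0] // U, macpi.shape[1] // V
    return macpi.reshape(H, U, W, V).transpose(1, 3, 0, 2)

# Round trip on a toy 4D light field:
lf = np.random.rand(5, 5, 32, 32)
assert np.array_equal(macpi_to_sai(sai_to_macpi(lf), 5, 5), lf)
```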
- Deep Selective Combinatorial Embedding and Consistency Regularization for Light Field Super-resolution [93.95828097088608]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high dimensionality and complex geometric structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF spatial SR framework to explore the coherence among LF sub-aperture images.
Experimental results over both synthetic and real-world LF datasets demonstrate the significant advantage of our approach over state-of-the-art methods.
arXiv Detail & Related papers (2020-09-26T08:34:37Z)
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high dimensionality and complex geometric structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
- Harnessing Multi-View Perspective of Light Fields for Low-Light Imaging [26.264066009506678]
We propose a deep neural network for Low-Light Light Field (L3F) restoration.
The proposed L3Fnet not only performs the necessary visual enhancement of each LF view but also preserves the epipolar geometry across views.
We show that L3Fnet can also be used for low-light enhancement of single-frame images.
arXiv Detail & Related papers (2020-03-05T05:32:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.