Arbitrary Volumetric Refocusing of Dense and Sparse Light Fields
- URL: http://arxiv.org/abs/2502.19238v1
- Date: Wed, 26 Feb 2025 15:47:23 GMT
- Title: Arbitrary Volumetric Refocusing of Dense and Sparse Light Fields
- Authors: Tharindu Samarakoon, Kalana Abeywardena, Chamira U. S. Edussooriya
- Abstract summary: We propose an end-to-end pipeline to simultaneously refocus multiple arbitrary regions of a dense or a sparse light field. We employ pixel-dependent shifts with the typical shift-and-sum method to refocus an LF. We employ a deep learning model based on the U-Net architecture to almost completely eliminate the ghosting artifacts.
- Score: 3.114475381459836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A four-dimensional light field (LF) captures both textural and geometrical information of a scene, in contrast to a two-dimensional image, which captures only the textural information. Post-capture refocusing is an exciting application of LFs enabled by the geometric information they capture. Previously proposed LF refocusing methods are mostly limited to refocusing a single planar or volumetric region of a scene corresponding to a depth range, and cannot simultaneously generate in-focus and out-of-focus regions having the same depth range. In this paper, we propose an end-to-end pipeline to simultaneously refocus multiple arbitrary planar or volumetric regions of a dense or a sparse LF. We employ pixel-dependent shifts with the typical shift-and-sum method to refocus an LF. The pixel-dependent shifts enable each pixel of an LF to be refocused independently. For sparse LFs, the shift-and-sum method introduces ghosting artifacts due to spatial undersampling. We employ a deep learning model based on the U-Net architecture to almost completely eliminate the ghosting artifacts. The experimental results obtained with several LF datasets confirm the effectiveness of the proposed method. In particular, sparse LFs refocused with the proposed method achieve a structural similarity index higher than 0.9 despite having only 20% of the data of dense LFs.
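The pixel-dependent variant of shift-and-sum described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the function name, the nearest-neighbour sampling, and the convention that the disparity map gives pixels of shift per unit of angular offset are all assumptions made for the sketch. A constant disparity map reproduces classic planar refocusing; a spatially varying map refocuses each output pixel independently.

```python
import numpy as np

def refocus_shift_and_sum(lf, disparity_map):
    """Pixel-dependent shift-and-sum refocusing (illustrative sketch).

    lf            : (U, V, H, W) array of grayscale sub-aperture views.
    disparity_map : (H, W) per-pixel refocusing slope; each output pixel is
                    formed by averaging view samples shifted in proportion
                    to its own disparity value.
    """
    U, V, H, W = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0   # angular centre of the view grid
    ys, xs = np.mgrid[0:H, 0:W]
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift grows with the view's angular offset from the centre,
            # scaled per pixel by the disparity map.
            sy = ys + (u - uc) * disparity_map
            sx = xs + (v - vc) * disparity_map
            # Nearest-neighbour sampling with border clamping; bilinear
            # interpolation would give smoother results.
            sy = np.clip(np.rint(sy).astype(int), 0, H - 1)
            sx = np.clip(np.rint(sx).astype(int), 0, W - 1)
            out += lf[u, v, sy, sx]
    return out / (U * V)
```

With a sparse angular grid (small U, V), this averaging produces the ghosting artifacts the paper addresses; the proposed U-Net-based model is then applied to suppress them.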
Related papers
- LFIC-DRASC: Deep Light Field Image Compression Using Disentangled Representation and Asymmetrical Strip Convolution [51.909036244222904]
We propose an end-to-end deep LF Image Compression method using Disentangled Representation and Asymmetrical Strip Convolution.
Experimental results demonstrate that the proposed LFIC-DRASC achieves an average bit-rate reduction of 20.5%.
arXiv Detail & Related papers (2024-09-18T05:33:42Z)
- Diffusion-based Light Field Synthesis [50.24624071354433]
LFdiff is a diffusion-based generative framework tailored for LF synthesis.
We propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations.
Extensive experiments demonstrate that LFdiff excels in synthesizing visually pleasing and disparity-controllable light fields.
arXiv Detail & Related papers (2024-02-01T13:13:16Z)
- View Adaptive Light Field Deblurring Networks with Depth Perception [21.55572150383203]
The light field (LF) deblurring task is challenging because blurred images arise from different causes, such as camera shake and object motion.
We introduce an angular position embedding to maintain the LF structure better, which ensures the model correctly restores the view information.
arXiv Detail & Related papers (2023-03-13T05:08:25Z)
- I See-Through You: A Framework for Removing Foreground Occlusion in Both Sparse and Dense Light Field Images [25.21481624956202]
A light field (LF) camera captures rich information from a scene. Using the information, the LF de-occlusion (LF-DeOcc) task aims to reconstruct the occlusion-free center view image.
We propose a framework, ISTY, which is divided into three roles: (1) extracting LF features, (2) defining the occlusion, and (3) inpainting occluded regions.
In experiments, qualitative and quantitative results show that the proposed framework outperforms state-of-the-art LF-DeOcc methods in both sparse and dense LF datasets.
arXiv Detail & Related papers (2023-01-16T12:25:42Z)
- Learning Single Image Defocus Deblurring with Misaligned Training Pairs [80.13320797431487]
We propose a joint deblurring and reblurring learning framework for single image defocus deblurring.
Our framework can be applied to boost defocus deblurring networks in terms of both quantitative metrics and visual quality.
arXiv Detail & Related papers (2022-11-26T07:36:33Z)
- A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
arXiv Detail & Related papers (2021-03-11T07:22:13Z)
- Light Field Reconstruction via Deep Adaptive Fusion of Hybrid Lenses [67.01164492518481]
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses.
We propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input.
Our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
arXiv Detail & Related papers (2021-02-14T06:44:47Z)
- Deep Anti-aliasing of Whole Focal Stack Using its Slice Spectrum [7.746179634773142]
The paper aims at removing the aliasing effects for the whole focal stack generated from a sparse 3D light field.
We first explore the structural characteristics embedded in the focal stack slice and its corresponding frequency-domain representation.
We also observe that the energy distribution of the focal stack slice (FSS) is always located within the same triangular area under different angular sampling rates.
arXiv Detail & Related papers (2021-01-23T05:14:49Z)
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensional spatial characteristics and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.