View Adaptive Light Field Deblurring Networks with Depth Perception
- URL: http://arxiv.org/abs/2303.06860v1
- Date: Mon, 13 Mar 2023 05:08:25 GMT
- Title: View Adaptive Light Field Deblurring Networks with Depth Perception
- Authors: Zeqi Shen, Shuo Zhang, Zhuhao Zhang, Qihua Chen, Xueyao Dong, Youfang
Lin
- Abstract summary: The Light Field (LF) deblurring task is challenging because blurred images arise from different causes, such as camera shake and object motion.
We introduce an angular position embedding to maintain the LF structure better, which ensures the model correctly restores the view information.
- Score: 21.55572150383203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Light Field (LF) deblurring task is challenging because blurred
images arise from different causes, such as camera shake and object motion.
Single-image deblurring methods are one possible way to address this problem.
However, because they process each view independently, they cannot effectively
exploit or preserve the LF structure, and the restoration results are usually
unsatisfactory. Moreover, LF blur is more complex because its degree varies
with both the view and the scene depth. We therefore carefully design a novel
LF deblurring network based on these LF blur characteristics. On one hand,
since the blur degree varies considerably across views, we design a novel
view-adaptive spatial convolution to deblur blurred LFs, which computes an
exclusive convolution kernel for each view. On the other hand, because the
blur degree also varies with object depth, we design a depth-perception view
attention that deblurs regions at different depths by selectively integrating
information from different views. In addition, we introduce an angular
position embedding to better maintain the LF structure, which ensures that the
model correctly restores view information. Quantitative and qualitative
experiments on synthetic and real images show that our method deblurs better
than other state-of-the-art methods.
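As a rough illustration of the two ideas in the abstract, the sketch below applies a distinct kernel to each sub-aperture view (view-adaptive convolution) and then fuses the views with softmax weights (a stand-in for view attention). All shapes, kernels, and function names here are hypothetical, not the authors' implementation.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation; enough for a sketch."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def view_adaptive_deblur(lf, per_view_kernels, view_weights):
    """Apply a distinct kernel to each sub-aperture view, then fuse the
    views with attention-like softmax weights (illustrative only)."""
    deblurred = np.stack([conv2d_valid(v, k)
                          for v, k in zip(lf, per_view_kernels)])
    w = np.exp(view_weights) / np.exp(view_weights).sum()  # softmax over views
    return np.tensordot(w, deblurred, axes=1)              # weighted fusion

rng = np.random.default_rng(0)
lf = rng.random((5, 8, 8))         # 5 views, 8x8 pixels (toy sizes)
kernels = rng.random((5, 3, 3))    # one 3x3 kernel per view
weights = rng.random(5)            # unnormalized per-view attention logits
out = view_adaptive_deblur(lf, kernels, weights)
print(out.shape)  # (6, 6)
```

With zero logits the softmax is uniform and the fusion reduces to a plain mean over views, which is a useful sanity check on the weighting step.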
Related papers
- Arbitrary Volumetric Refocusing of Dense and Sparse Light Fields [3.114475381459836]
We propose an end-to-end pipeline to simultaneously refocus multiple arbitrary regions of a dense or sparse light field.
We employ pixel-dependent shifts with the typical shift-and-sum method to refocus an LF.
We employ a deep learning model based on the U-Net architecture to almost completely eliminate the ghosting artifacts.
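The shift-and-sum method referred to above is the classic light-field refocusing operation. A rough sketch, simplified to a uniform (rather than pixel-dependent) disparity and integer circular shifts via `np.roll`:

```python
import numpy as np

def shift_and_sum_refocus(lf, slope):
    """Classic shift-and-sum refocusing for a horizontal-parallax LF of
    shape (views, H, W): shift each sub-aperture view in proportion to
    its angular offset from the centre view, then average. `slope` plays
    the role of the refocus disparity (integer shifts only, for clarity)."""
    n = lf.shape[0]
    center = n // 2
    acc = np.zeros(lf.shape[1:], dtype=float)
    for u in range(n):
        shift = int(round(slope * (u - center)))
        acc += np.roll(lf[u], shift, axis=1)  # circular shift along width
    return acc / n

rng = np.random.default_rng(1)
lf = rng.random((5, 4, 16))          # 5 views, 4x16 pixels (toy sizes)
img = shift_and_sum_refocus(lf, slope=1.0)
print(img.shape)  # (4, 16)
```

At `slope=0` no view is shifted, so the result is simply the mean over views; scene content at the disparity matched by `slope` adds coherently, while the rest blurs out.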
arXiv Detail & Related papers (2025-02-26T15:47:23Z) - Diffusion-based Light Field Synthesis [50.24624071354433]
LFdiff is a diffusion-based generative framework tailored for LF synthesis.
We propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations.
Extensive experiments demonstrate that LFdiff excels in synthesizing visually pleasing and disparity-controllable light fields.
arXiv Detail & Related papers (2024-02-01T13:13:16Z) - Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z) - End-to-end Learning for Joint Depth and Image Reconstruction from
Diffracted Rotation [10.896567381206715]
We propose a novel end-to-end learning approach for depth from diffracted rotation.
Our approach requires a significantly less complex model and less training data, yet it is superior to existing methods in the task of monocular depth estimation.
arXiv Detail & Related papers (2022-04-14T16:14:37Z) - Disentangling Light Fields for Super-Resolution and Disparity Estimation [67.50796924758221]
Light field (LF) cameras record both intensity and directions of light rays, and encode 3D scenes into 4D LF images.
It is challenging for convolutional neural networks (CNNs) to process LF images since the spatial and angular information are highly intertwined with varying disparities.
We propose a generic mechanism to disentangle these coupled information for LF image processing.
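Disentangling mechanisms of this kind are commonly built on reorganizing the 4-D LF so that spatial and angular samples become separable in a 2-D layout. A minimal sketch of such a macro-pixel reorganization (shapes assumed; not the authors' exact operators):

```python
import numpy as np

def to_macro_pixel(lf):
    """Reorganize a 4-D LF (U, V, H, W) into a 2-D macro-pixel image
    (H*U, W*V): each spatial location holds a U x V block of angular
    samples, so angular convolutions become strided spatial ones."""
    U, V, H, W = lf.shape
    return lf.transpose(2, 0, 3, 1).reshape(H * U, W * V)

def from_macro_pixel(mpi, U, V):
    """Inverse reorganization back to (U, V, H, W)."""
    HU, WV = mpi.shape
    H, W = HU // U, WV // V
    return mpi.reshape(H, U, W, V).transpose(1, 3, 0, 2)

rng = np.random.default_rng(2)
lf = rng.random((3, 3, 4, 4))        # 3x3 views, 4x4 pixels (toy sizes)
mpi = to_macro_pixel(lf)
print(mpi.shape)  # (12, 12)
assert np.allclose(from_macro_pixel(mpi, 3, 3), lf)  # lossless round trip
```

The round-trip assertion is the key property: the reorganization only permutes samples, so spatial and angular feature extractors can each see a plain 2-D image without losing information.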
arXiv Detail & Related papers (2022-02-22T01:04:41Z) - Revisiting Light Field Rendering with Deep Anti-Aliasing Neural Network [51.90655635745856]
In this paper, we revisit the classic LF rendering framework to address both challenges by incorporating it with advanced deep learning techniques.
First, we analytically show that the essential issue behind the large disparity and non-Lambertian challenges is the aliasing problem.
We introduce an alternative framework to perform anti-aliasing reconstruction in the image domain and analytically show comparable efficacy on the aliasing issue.
arXiv Detail & Related papers (2021-04-14T12:03:25Z) - Deep Selective Combinatorial Embedding and Consistency Regularization
for Light Field Super-resolution [93.95828097088608]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF spatial SR framework to explore the coherence among LF sub-aperture images.
Experimental results over both synthetic and real-world LF datasets demonstrate the significant advantage of our approach over state-of-the-art methods.
arXiv Detail & Related papers (2020-09-26T08:34:37Z) - Light Field Image Super-Resolution Using Deformable Convolution [46.03974092854241]
We propose a deformable convolution network (i.e., LF-DFnet) to handle the disparity problem for LF image SR.
Our LF-DFnet can generate high-resolution images with more faithful details and achieve state-of-the-art reconstruction accuracy.
arXiv Detail & Related papers (2020-07-07T15:07:33Z) - Light Field Spatial Super-resolution via Deep Combinatorial Geometry
Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided (including all listed content) and is not responsible for any consequences of its use.