Light Field Image Super-Resolution Using Deformable Convolution
- URL: http://arxiv.org/abs/2007.03535v4
- Date: Wed, 25 Nov 2020 12:01:05 GMT
- Title: Light Field Image Super-Resolution Using Deformable Convolution
- Authors: Yingqian Wang, Jungang Yang, Longguang Wang, Xinyi Ying, Tianhao Wu,
Wei An, Yulan Guo
- Abstract summary: We propose a deformable convolution network (i.e., LF-DFnet) to handle the disparity problem for LF image SR.
Our LF-DFnet can generate high-resolution images with more faithful details and achieve state-of-the-art reconstruction accuracy.
- Score: 46.03974092854241
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Light field (LF) cameras can record scenes from multiple perspectives, and
thus introduce beneficial angular information for image super-resolution (SR).
However, it is challenging to incorporate angular information due to
disparities among LF images. In this paper, we propose a deformable convolution
network (i.e., LF-DFnet) to handle the disparity problem for LF image SR.
Specifically, we design an angular deformable alignment module (ADAM) for
feature-level alignment. Based on ADAM, we further propose a
collect-and-distribute approach to perform bidirectional alignment between the
center-view feature and each side-view feature. Using our approach, angular
information can be well incorporated and encoded into features of each view,
which benefits the SR reconstruction of all LF images. Moreover, we develop a
baseline-adjustable LF dataset to evaluate SR performance under different
disparity variations. Experiments on both public and our self-developed
datasets have demonstrated the superiority of our method. Our LF-DFnet can
generate high-resolution images with more faithful details and achieve
state-of-the-art reconstruction accuracy. Besides, our LF-DFnet is more robust
to disparity variations, which has not been well addressed in the literature.
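To make the alignment idea concrete, the following is a minimal PyTorch sketch of deformable-convolution feature alignment in the spirit of ADAM and the collect-and-distribute scheme. The module layout, channel sizes, offset-prediction network, and mean fusion are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of ADAM-style alignment with deformable convolution (assumed
# design for illustration; not the authors' released code).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableAlign(nn.Module):
    """Align one feature map to another by predicting per-pixel kernel
    offsets from the concatenated pair, so the deformable kernel can
    compensate for the disparity between views."""

    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        # 2 offset values (dy, dx) per kernel sampling position.
        self.offset_pred = nn.Conv2d(
            2 * channels, 2 * kernel_size * kernel_size, 3, padding=1)
        self.deform = DeformConv2d(
            channels, channels, kernel_size, padding=kernel_size // 2)

    def forward(self, src, ref):
        offsets = self.offset_pred(torch.cat([src, ref], dim=1))
        return self.deform(src, offsets)


# Collect-and-distribute: side-view features are aligned to the center
# view ("collect"), fused, and the fused feature is aligned back to each
# side view ("distribute"). Mean fusion is a placeholder assumption.
align = DeformableAlign(channels=64)
center = torch.randn(1, 64, 32, 32)
sides = [torch.randn(1, 64, 32, 32) for _ in range(8)]
collected = [align(s, center) for s in sides]
fused = torch.stack([center, *collected]).mean(dim=0)
distributed = [align(fused, s) for s in sides]
```

A faithful implementation would likely use separate collection and distribution modules and a learned fusion step; sharing one module here only keeps the sketch short.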
Related papers
- LFIC-DRASC: Deep Light Field Image Compression Using Disentangled Representation and Asymmetrical Strip Convolution [51.909036244222904]
We propose an end-to-end deep LF Image Compression method using Disentangled Representation and Asymmetrical Strip Convolution.
Experimental results demonstrate that the proposed LFIC-DRASC achieves an average bit rate reduction of 20.5%.
arXiv Detail & Related papers (2024-09-18T05:33:42Z)
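The summary names asymmetrical strip convolution without defining it. A common reading of the general technique is a parallel pair of 1 x k and k x 1 convolutions, which covers long horizontal and vertical extents more cheaply than a full k x k kernel. A minimal sketch under that assumption (channel count and kernel length are invented; this is not the LFIC-DRASC architecture):

```python
# Generic asymmetrical strip convolution block (illustrative assumption,
# not the LFIC-DRASC implementation).
import torch
import torch.nn as nn


class StripConvBlock(nn.Module):
    def __init__(self, channels: int = 32, k: int = 7):
        super().__init__()
        # A wide 1 x k kernel and a tall k x 1 kernel, applied in parallel.
        self.horizontal = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2))
        self.vertical = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0))

    def forward(self, x):
        return self.horizontal(x) + self.vertical(x)


x = torch.randn(1, 32, 64, 64)
assert StripConvBlock()(x).shape == x.shape  # shape-preserving
```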
- Diffusion-based Light Field Synthesis [50.24624071354433]
LFdiff is a diffusion-based generative framework tailored for LF synthesis.
We propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations.
Extensive experiments demonstrate that LFdiff excels in synthesizing visually pleasing and disparity-controllable light fields.
arXiv Detail & Related papers (2024-02-01T13:13:16Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
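The loop described above alternates between estimating an illumination map from the current enhanced result and using it to produce a new enhanced result. A minimal sketch under a Retinex-style assumption (observation = reflectance x illumination); the illumination estimator, stage count, and update rule are hypothetical, not DCUNet itself:

```python
# Toy Retinex-style unfolding loop (assumed formulation for illustration;
# not the DCUNet architecture).
import torch
import torch.nn as nn


class UnfoldingStage(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        # Hypothetical illumination estimator operating on the current
        # enhanced estimate.
        self.illum_net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, low_light, enhanced):
        illum = self.illum_net(enhanced).clamp(min=1e-3)
        return low_light / illum  # brighter estimate for the next stage


stages = nn.ModuleList([UnfoldingStage() for _ in range(3)])
low = torch.rand(1, 3, 64, 64) * 0.2  # synthetic dark observation
enhanced = low                        # initialize with the observation
for stage in stages:
    enhanced = stage(low, enhanced)   # re-estimate illumination, re-enhance
```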
- Physics-Informed Ensemble Representation for Light-Field Image Super-Resolution [12.156009287223382]
We analyze the coordinate transformation of the light field (LF) imaging process to reveal the geometric relationship in the LF images.
We introduce a new LF subspace of virtual-slit images (VSI) that provide sub-pixel information complementary to sub-aperture images.
To super-resolve image structures from undersampled LF data, we propose a geometry-aware decoder, named EPIXformer.
arXiv Detail & Related papers (2023-05-31T16:27:00Z)
- Learning Non-Local Spatial-Angular Correlation for Light Field Image Super-Resolution [36.69391399634076]
Exploiting spatial-angular correlation is crucial to light field (LF) image super-resolution (SR).
We propose a simple yet effective method to learn the non-local spatial-angular correlation for LF image SR.
Our method can fully incorporate the information from all angular views while achieving a global receptive field along the epipolar line.
arXiv Detail & Related papers (2023-02-16T03:40:40Z)
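The global receptive field along the epipolar line can be illustrated by rearranging the 4D LF into epipolar-plane images (EPIs) and applying self-attention over each EPI, so every token sees the whole line. The tensor layout and the use of nn.MultiheadAttention are assumptions for illustration, not the paper's method:

```python
# Self-attention along epipolar-plane images (illustrative assumption).
import torch
import torch.nn as nn

# A U x V grid of angular views, each H x W with C feature channels.
U, V, H, W, C = 5, 5, 32, 32, 16
lf = torch.randn(U, V, H, W, C)

# A horizontal EPI fixes (v, h) and varies (u, w); flatten (u, w) into one
# sequence so attention spans the entire epipolar line.
epi = lf.permute(1, 2, 0, 3, 4).reshape(V * H, U * W, C)
attn = nn.MultiheadAttention(embed_dim=C, num_heads=4, batch_first=True)
out, _ = attn(epi, epi, epi)
lf_out = out.reshape(V, H, U, W, C).permute(2, 0, 1, 3, 4)  # back to 4D LF
```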
- Light Field Image Super-Resolution with Transformers [11.104338786168324]
CNN-based methods have achieved remarkable performance in LF image SR.
We propose a simple but effective Transformer-based method for LF image SR.
Our method achieves superior SR performance with a small model size and low computational cost.
arXiv Detail & Related papers (2021-08-17T12:58:11Z)
- Deep Selective Combinatorial Embedding and Consistency Regularization for Light Field Super-resolution [93.95828097088608]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF spatial SR framework to explore the coherence among LF sub-aperture images.
Experimental results over both synthetic and real-world LF datasets demonstrate the significant advantage of our approach over state-of-the-art methods.
arXiv Detail & Related papers (2020-09-26T08:34:37Z)
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.