Real-World Light Field Image Super-Resolution via Degradation Modulation
- URL: http://arxiv.org/abs/2206.06214v2
- Date: Thu, 30 Nov 2023 15:28:20 GMT
- Title: Real-World Light Field Image Super-Resolution via Degradation Modulation
- Authors: Yingqian Wang, Zhengyu Liang, Longguang Wang, Jungang Yang, Wei An,
Yulan Guo
- Abstract summary: We propose a simple yet effective method for real-world LF image SR.
A practical LF degradation model is developed to formulate the degradation process of real LF images.
A convolutional neural network is designed to incorporate the degradation prior into the SR process.
- Score: 59.68036846233918
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed great advances of deep neural networks (DNNs)
in light field (LF) image super-resolution (SR). However, existing DNN-based LF
image SR methods are developed for a single fixed degradation (e.g., bicubic
downsampling), and thus cannot be applied to super-resolve real LF images with
diverse degradations. In this paper, we propose a simple yet effective method
for real-world LF image SR. In our method, a practical LF degradation model is
developed to formulate the degradation process of real LF images. Then, a
convolutional neural network is designed to incorporate the degradation prior
into the SR process. By training on LF images degraded with our formulated
model, our network learns to modulate different degradations while incorporating
both spatial and angular information in LF images. Extensive experiments on
both synthetically degraded and real-world LF images demonstrate the
effectiveness of our method. Compared with existing state-of-the-art single and
LF image SR methods, our method achieves superior SR performance under a wide
range of degradations and generalizes better to real LF images. Code and
models are available at https://yingqianwang.github.io/LF-DMnet/.
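To make the degradation formulation above concrete, the following is a minimal sketch of the kind of blur-downsample-noise pipeline the abstract describes, applied view by view to the sub-aperture images of a light field. The scale factor, kernel width, and noise level are illustrative assumptions, not the exact degradation model or parameters used by LF-DMnet.

```python
# Minimal sketch of a synthetic light-field degradation pipeline:
# blur -> downsample -> additive noise, applied to every sub-aperture image.
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_light_field(lf, scale=4, blur_sigma=1.5, noise_sigma=0.01, rng=None):
    """Degrade an LF array of shape (U, V, H, W) view by view.

    U, V index the angular dimensions; H, W are the spatial dimensions.
    """
    rng = np.random.default_rng() if rng is None else rng
    U, V, H, W = lf.shape
    lr = np.empty((U, V, H // scale, W // scale), dtype=lf.dtype)
    for u in range(U):
        for v in range(V):
            blurred = gaussian_filter(lf[u, v], sigma=blur_sigma)  # isotropic Gaussian blur
            down = blurred[::scale, ::scale]                       # simple strided downsampling
            lr[u, v] = down + rng.normal(0.0, noise_sigma, down.shape)  # additive Gaussian noise
    return np.clip(lr, 0.0, 1.0)

hr_lf = np.random.rand(5, 5, 128, 128).astype(np.float32)  # dummy 5x5 light field
lr_lf = degrade_light_field(hr_lf, scale=4)
print(lr_lf.shape)  # (5, 5, 32, 32)
```

In a training pipeline of this kind, the sampled degradation parameters (here blur_sigma and noise_sigma) would also be passed to the SR network as a degradation prior, which is what allows the network to modulate its behaviour for different degradations.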
Related papers
- LFIC-DRASC: Deep Light Field Image Compression Using Disentangled Representation and Asymmetrical Strip Convolution [51.909036244222904]
We propose an end-to-end deep LF Image Compression method using Disentangled Representation and Asymmetrical Strip Convolution.
Experimental results demonstrate that the proposed LFIC-DRASC achieves an average bit-rate reduction of 20.5%.
arXiv Detail & Related papers (2024-09-18T05:33:42Z) - Incorporating Degradation Estimation in Light Field Spatial Super-Resolution [54.603510192725786]
We present LF-DEST, an effective blind Light Field SR method that incorporates explicit Degradation Estimation to handle various degradation types.
We conduct extensive experiments on benchmark datasets, demonstrating that LF-DEST achieves superior performance across a variety of degradation scenarios in light field SR.
arXiv Detail & Related papers (2024-05-11T13:14:43Z) - LFSRDiff: Light Field Image Super-Resolution via Diffusion Models [18.20217829625834]
Light field (LF) image super-resolution (SR) is a challenging problem due to its inherent ill-posed nature.
Mainstream LF image SR methods typically adopt a deterministic approach, generating only a single output supervised by pixel-wise loss functions.
We introduce LFSRDiff, the first diffusion-based LF image SR model, by incorporating the LF disentanglement mechanism.
arXiv Detail & Related papers (2023-11-27T07:31:12Z) - Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z) - Light Field Image Super-Resolution with Transformers [11.104338786168324]
CNN-based methods have achieved remarkable performance in LF image SR.
We propose a simple but effective Transformer-based method for LF image SR.
Our method achieves superior SR performance with a small model size and low computational cost.
arXiv Detail & Related papers (2021-08-17T12:58:11Z) - Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures frequency-domain consistency when applying Super-Resolution (SR) methods to real scenes.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easily implemented Convolutional Neural Network (CNN) SR models.
arXiv Detail & Related papers (2020-12-18T08:25:39Z) - Deep Selective Combinatorial Embedding and Consistency Regularization
for Light Field Super-resolution [93.95828097088608]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF spatial SR framework to explore the coherence among LF sub-aperture images.
Experimental results over both synthetic and real-world LF datasets demonstrate the significant advantage of our approach over state-of-the-art methods.
arXiv Detail & Related papers (2020-09-26T08:34:37Z) - Light Field Image Super-Resolution Using Deformable Convolution [46.03974092854241]
We propose a deformable convolution network (i.e., LF-DFnet) to handle the disparity problem for LF image SR.
Our LF-DFnet can generate high-resolution images with more faithful details and achieve state-of-the-art reconstruction accuracy.
arXiv Detail & Related papers (2020-07-07T15:07:33Z)
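A recurring idea across the main paper and the blind-SR entries above (LF-DMnet, LF-DEST, FCA) is conditioning the SR network on a known or estimated degradation. The sketch below shows one common form of such degradation-aware modulation, where a small MLP maps the degradation prior to per-channel scale and shift factors; the layer sizes and prior dimensionality are assumptions for illustration, not any paper's exact architecture.

```python
# Minimal sketch of degradation-aware feature modulation (assumed design,
# not the exact module of LF-DMnet or any related paper).
import torch
import torch.nn as nn

class DegradationModulation(nn.Module):
    def __init__(self, num_feats=64, deg_dim=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(deg_dim, num_feats),
            nn.ReLU(inplace=True),
            nn.Linear(num_feats, 2 * num_feats),  # per-channel scale and shift
        )

    def forward(self, feats, deg):
        # feats: (B, C, H, W) intermediate SR features
        # deg:   (B, deg_dim) degradation prior, e.g. [blur_sigma, noise_sigma]
        scale, shift = self.mlp(deg).chunk(2, dim=1)
        return feats * scale[..., None, None] + shift[..., None, None]

feats = torch.randn(1, 64, 32, 32)
deg = torch.tensor([[1.5, 0.01]])  # assumed degradation prior
out = DegradationModulation()(feats, deg)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```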
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.