Sub-Aperture Feature Adaptation in Single Image Super-resolution Model
for Light Field Imaging
- URL: http://arxiv.org/abs/2207.11894v2
- Date: Tue, 26 Jul 2022 04:01:56 GMT
- Authors: Aupendu Kar, Suresh Nehra, Jayanta Mukhopadhyay, Prabir Kumar Biswas
- Abstract summary: This paper proposes an adaptation module in a pretrained Single Image Super Resolution (SISR) network to leverage the powerful SISR model.
It is an adaptation in the SISR network to further exploit the spatial and angular information in LF images to improve the super resolution performance.
- Score: 17.721259583120396
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the availability of commercial Light Field (LF) cameras, LF imaging has
emerged as a promising technology in computational photography. However,
the spatial resolution of commercial microlens-based LF cameras is significantly
constrained because of the inherent multiplexing of spatial and angular
information, and this becomes the main bottleneck for other applications
of light field cameras. This paper proposes an adaptation module in a
pretrained Single Image Super-Resolution (SISR) network to leverage the
powerful SISR model instead of using highly engineered, light-field-specific
super-resolution models. The adaptation module consists of a Sub-aperture
Shift block and a fusion block; it adapts the SISR network to further exploit
the spatial and angular information in LF images and improve super-resolution
performance. Experimental validation shows that the proposed method outperforms
existing light field super-resolution algorithms. It also achieves PSNR gains
of more than 1 dB across all datasets compared to the same pretrained SISR
models for scale factor 2, and PSNR gains of 0.6 to 1 dB for scale factor 4.
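The adaptation module described above (a Sub-aperture Shift block followed by a fusion block) can be illustrated with a minimal numpy sketch. The integer disparity offsets and the averaging fusion are illustrative assumptions, not the paper's learned design, which operates on deep features inside the SISR network.

```python
import numpy as np

def shift_view(view, dy, dx):
    """Circularly shift a 2-D view by (dy, dx) pixels (wrap-around for simplicity)."""
    return np.roll(np.roll(view, dy, axis=0), dx, axis=1)

def shift_and_fuse(views, offsets):
    """Align each sub-aperture view toward the centre view, then fuse.

    views:   list of HxW arrays (sub-aperture images of one light field)
    offsets: list of (dy, dx) integer disparities, one per view (assumed known here)
    """
    aligned = [shift_view(v, dy, dx) for v, (dy, dx) in zip(views, offsets)]
    # Fusion-block stand-in: plain averaging of the aligned views
    # (the actual fusion block is learned).
    return np.mean(aligned, axis=0)

# Toy 3x3 "light field": copies of one view displaced by known integer disparities.
base = np.arange(25, dtype=float).reshape(5, 5)
offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
views = [shift_view(base, -dy, -dx) for dy, dx in offsets]
fused = shift_and_fuse(views, offsets)
print(np.allclose(fused, base))  # aligned views fuse back to the base view: True
```

In the real model the shift compensates sub-pixel disparity between sub-aperture images so that the fusion step can combine angular information without ghosting.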
Related papers
- LGFN: Lightweight Light Field Image Super-Resolution using Local Convolution Modulation and Global Attention Feature Extraction [5.461017270708014]
We propose a lightweight model named LGFN which integrates the local and global features of different views and the features of different channels for LF image SR.
Our model has 0.45M parameters and 19.33G FLOPs, achieving competitive performance.
arXiv Detail & Related papers (2024-09-26T11:53:25Z) - Light Field Spatial Resolution Enhancement Framework [0.24578723416255746]
We propose a novel light field framework for resolution enhancement.
The first module generates a high-resolution, all-in-focus image.
The second module, a texture transformer network, enhances the resolution of each light field perspective independently.
arXiv Detail & Related papers (2024-05-05T02:07:10Z) - Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z) - L1BSR: Exploiting Detector Overlap for Self-Supervised Single-Image
Super-Resolution of Sentinel-2 L1B Imagery [14.233972890133769]
High-resolution satellite imagery is a key element for many Earth monitoring applications.
The lack of reliable high-resolution ground truth limits the application of deep learning methods to this task.
We propose L1BSR, a deep learning-based method for single-image super-resolution and band alignment of Sentinel-2 L1B 10m bands.
arXiv Detail & Related papers (2023-04-14T00:17:57Z) - Learning Texture Transformer Network for Light Field Super-Resolution [1.5469452301122173]
We propose a method to improve the spatial resolution of light field images with the aid of the Texture Transformer Network (TTSR).
The results demonstrate around 4 dB to 6 dB PSNR gain over a bicubically resized light field image.
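The PSNR gains quoted throughout these abstracts are decibel differences computed from the mean squared error between a reconstruction and the ground truth. A minimal sketch, assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)
test = ref + 10.0  # uniform error of 10 grey levels -> MSE = 100
print(round(psnr(ref, test), 2))  # 10*log10(255^2 / 100) ≈ 28.13
```

A "1 dB gain" thus corresponds to roughly a 21% reduction in mean squared error.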
arXiv Detail & Related papers (2022-10-09T15:16:07Z) - Deep Burst Super-Resolution [165.90445859851448]
We propose a novel architecture for the burst super-resolution task.
Our network takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output.
In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset.
arXiv Detail & Related papers (2021-01-26T18:57:21Z) - Deep Selective Combinatorial Embedding and Consistency Regularization
for Light Field Super-resolution [93.95828097088608]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF spatial SR framework to explore the coherence among LF sub-aperture images.
Experimental results over both synthetic and real-world LF datasets demonstrate the significant advantage of our approach over state-of-the-art methods.
arXiv Detail & Related papers (2020-09-26T08:34:37Z) - Deep Generative Adversarial Residual Convolutional Networks for
Real-World Super-Resolution [31.934084942626257]
We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN)
It follows real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
The proposed network exploits the residual learning by minimizing the energy-based objective function with powerful image regularization and convex optimization techniques.
arXiv Detail & Related papers (2020-05-03T00:12:38Z) - Light Field Spatial Super-resolution via Deep Combinatorial Geometry
Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z) - PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of
Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z) - Learning Light Field Angular Super-Resolution via a Geometry-Aware
Network [101.59693839475783]
We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves the PSNR over the second-best method by up to 2 dB on average, while reducing execution time by a factor of 48.
arXiv Detail & Related papers (2020-02-26T02:36:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.