Multi-Attention Generative Adversarial Network for Remote Sensing Image
Super-Resolution
- URL: http://arxiv.org/abs/2107.06536v1
- Date: Wed, 14 Jul 2021 08:06:19 GMT
- Title: Multi-Attention Generative Adversarial Network for Remote Sensing Image
Super-Resolution
- Authors: Meng Xu, Zhihao Wang, Jiasong Zhu, Xiuping Jia, Sen Jia
- Abstract summary: Image super-resolution (SR) methods can generate remote sensing images with high spatial resolution without increasing the cost.
We propose a network based on the generative adversarial network (GAN) to generate high-resolution remote sensing images.
- Score: 17.04588012373861
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image super-resolution (SR) methods can generate remote sensing images with
high spatial resolution without increasing the cost, thereby providing a
feasible way to acquire high-resolution remote sensing images, which are
difficult to obtain due to the high cost of acquisition equipment and complex
weather. Clearly, image super-resolution is a severely ill-posed problem.
Fortunately, with the development of deep learning, the powerful fitting
ability of deep neural networks has solved this problem to some extent. In this
paper, we propose a network based on the generative adversarial network (GAN)
to generate high-resolution remote sensing images, named the multi-attention
generative adversarial network (MA-GAN). We first design a GAN-based framework
for the image SR task, whose core is the image generator with post-upsampling.
The main body of the
generator contains two blocks; one is the pyramidal convolution in the
residual-dense block (PCRDB), and the other is the attention-based upsample
(AUP) block. The attention-based pyramidal convolution (AttPConv) in the PCRDB
block is a module that combines multi-scale convolution with channel attention
to automatically learn and adjust the scaling of the residuals for better
results. The AUP block combines upsampling with pixel attention (PA) to support
arbitrary scale factors. These two blocks work together to help generate
better-quality images. We design a loss function based on pixel loss and
introduce both adversarial loss and feature loss to guide generator learning.
We compare our method with several
state-of-the-art methods on a remote sensing scene image dataset, and the
experimental results consistently demonstrate the effectiveness of the proposed
MA-GAN.
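The abstract names the building blocks but gives no implementation details, so the following is only a minimal PyTorch-style sketch of how the described ideas could be organized: a multi-scale ("pyramidal") convolution re-weighted by channel attention that scales a residual (AttPConv-style), an interpolation-plus-pixel-attention upsampler that accepts arbitrary scale factors (AUP-style), and a combined pixel, adversarial, and feature loss. All class names, layer widths, kernel sizes, and loss weights below are illustrative assumptions, not the authors' MA-GAN code.

```python
# Illustrative sketch only (not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttPConvSketch(nn.Module):
    """Multi-scale convolutions fused and re-weighted by channel attention;
    the attended features act as a learned scaling of the residual branch."""

    def __init__(self, channels=64, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes]
        )
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, 1)
        # Squeeze-and-excitation style channel attention (assumed form).
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.fuse(multi_scale)
        # Channel attention provides the per-channel residual scaling.
        return x + fused * self.channel_att(fused)


class AUPSketch(nn.Module):
    """Interpolation-based upsampling followed by pixel attention (a per-pixel
    sigmoid gate), so any scale factor can be requested at run time."""

    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.pixel_att = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x, scale=4.0):
        x = F.interpolate(x, scale_factor=scale, mode="nearest")
        feat = self.conv(x)
        return feat * self.pixel_att(feat)


def generator_loss_sketch(sr, hr, disc_fake_logits, feat_sr, feat_hr,
                          w_adv=1e-3, w_feat=1e-2):
    """Pixel loss plus adversarial and feature (perceptual) terms.
    The weights are placeholders; the abstract does not specify them."""
    pixel = F.l1_loss(sr, hr)
    adversarial = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    feature = F.l1_loss(feat_sr, feat_hr)  # e.g. features from a pretrained VGG
    return pixel + w_adv * adversarial + w_feat * feature
```

A full post-upsampling generator would presumably stack several such residual blocks before the AUP stage; block counts, channel widths, and loss weights are not given in the abstract, so consult the paper for the actual configuration.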
Related papers
- SRTransGAN: Image Super-Resolution using Transformer based Generative
Adversarial Network [16.243363392717434]
We propose a transformer-based encoder-decoder network as a generator to produce 2x and 4x super-resolved images.
The proposed SRTransGAN outperforms existing methods by 4.38% on average in PSNR and SSIM scores.
arXiv Detail & Related papers (2023-12-04T16:22:39Z)
- Spatially-Adaptive Feature Modulation for Efficient Image
Super-Resolution [90.16462805389943]
We develop a spatially-adaptive feature modulation (SAFM) mechanism upon a vision transformer (ViT)-like block.
The proposed method is $3\times$ smaller than state-of-the-art efficient SR methods.
arXiv Detail & Related papers (2023-02-27T14:19:31Z)
- Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image
Super-Resolution with Subpixel Fusion [67.35540259040806]
We propose a subpixel-level HS super-resolution framework by devising a novel decoupled-and-coupled network, called DC-Net.
As the name suggests, DC-Net first decouples the input into common (or cross-sensor) and sensor-specific components.
We append a self-supervised learning module behind the CSU net by guaranteeing the material consistency to enhance the detailed appearances of the restored HS product.
arXiv Detail & Related papers (2022-05-07T23:40:36Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Hybrid Pixel-Unshuffled Network for Lightweight Image Super-Resolution [64.54162195322246]
Convolutional neural networks (CNNs) have achieved great success on image super-resolution (SR).
Most deep CNN-based SR models require massive computation to obtain high performance.
We propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task (a minimal pixel-unshuffle sketch is given after this list).
arXiv Detail & Related papers (2022-03-16T20:10:41Z)
- TWIST-GAN: Towards Wavelet Transform and Transferred GAN for
Spatio-Temporal Single Image Super Resolution [4.622977798361014]
Single Image Super-resolution (SISR) produces high-resolution images with fine spatial resolutions from a remotely sensed image with low spatial resolution.
Deep learning and generative adversarial networks (GANs) have made breakthroughs in the challenging task of single image super-resolution (SISR).
arXiv Detail & Related papers (2021-04-20T22:12:38Z)
- Super-Resolution of Real-World Faces [3.4376560669160394]
Real low-resolution (LR) face images contain degradations which are too varied and complex to be captured by known downsampling kernels.
In this paper, we propose a two-module super-resolution network where the feature extractor module extracts robust features from the LR image.
We train a degradation GAN to convert bicubically downsampled clean images to real degraded images, and interpolate between the obtained degraded LR image and its clean LR counterpart.
arXiv Detail & Related papers (2020-11-04T17:25:54Z)
- Perceptual Extreme Super Resolution Network with Receptive Field Block [11.557328975199043]
We develop a super resolution network with receptive field block based on Enhanced SRGAN.
RFB-ESRGAN has achieved competitive results in object detection and classification.
arXiv Detail & Related papers (2020-05-26T09:38:33Z)
- Deep Generative Adversarial Residual Convolutional Networks for
Real-World Super-Resolution [31.934084942626257]
We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN).
It follows real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
The proposed network exploits the residual learning by minimizing the energy-based objective function with powerful image regularization and convex optimization techniques.
arXiv Detail & Related papers (2020-05-03T00:12:38Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
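As a side note to the Hybrid Pixel-Unshuffled Network entry above: pixel unshuffle (space-to-depth) is a standard, lossless way to trade spatial resolution for channel depth, which is what makes it a cheap downsampling module for SR backbones. The sketch below is an illustrative assumption of such a module using torch.nn.PixelUnshuffle, not the HPUN implementation itself; the class name and layer widths are placeholders.

```python
# Minimal sketch of pixel-unshuffle downsampling (not the HPUN module):
# space-to-depth rearrangement followed by a 1x1 convolution to mix channels.
import torch
import torch.nn as nn


class PixelUnshuffleDown(nn.Module):
    def __init__(self, channels=64, factor=2):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(factor)  # (C, H, W) -> (C*f^2, H/f, W/f)
        self.project = nn.Conv2d(channels * factor ** 2, channels, 1)

    def forward(self, x):
        return self.project(self.unshuffle(x))


x = torch.randn(1, 64, 48, 48)
print(PixelUnshuffleDown()(x).shape)  # torch.Size([1, 64, 24, 24])
```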
This list is automatically generated from the titles and abstracts of the papers on this site.