TWIST-GAN: Towards Wavelet Transform and Transferred GAN for
Spatio-Temporal Single Image Super Resolution
- URL: http://arxiv.org/abs/2104.10268v1
- Date: Tue, 20 Apr 2021 22:12:38 GMT
- Title: TWIST-GAN: Towards Wavelet Transform and Transferred GAN for
Spatio-Temporal Single Image Super Resolution
- Authors: Fayaz Ali Dharejo, Farah Deeba, Yuanchun Zhou, Bhagwan Das, Munsif Ali
Jatoi, Muhammad Zawish, Yi Du, and Xuezhi Wang
- Abstract summary: Single Image Super-resolution (SISR) produces high-resolution images with fine spatial resolutions from a remotely sensed image with low spatial resolution.
Deep learning and generative adversarial networks (GANs) have made breakthroughs on the challenging task of single image super-resolution (SISR).
- Score: 4.622977798361014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Single Image Super-resolution (SISR) produces high-resolution images with
fine spatial resolutions from a remotely sensed image with low spatial
resolution. Recently, deep learning and generative adversarial networks (GANs)
have made breakthroughs for the challenging task of single image
super-resolution (SISR). However, the generated image still suffers from
undesirable artifacts such as the absence of texture-feature representation and
high-frequency information. We propose a frequency domain-based spatio-temporal
remote sensing single image super-resolution technique to reconstruct the HR
image combined with generative adversarial networks (GANs) on various frequency
bands (TWIST-GAN). We have introduced a new method incorporating Wavelet
Transform (WT) characteristics and a transferred generative adversarial network.
The LR image has been split into various frequency bands by using the WT,
while the transferred generative adversarial network predicts high-frequency
components via a proposed architecture. Finally, the inverse wavelet transform
produces a reconstructed super-resolution image. The model is
first trained on the external DIV2K dataset and validated with the UC Merced
Landsat remote sensing dataset and Set14, with each image of size 256x256.
Following that, transferred GANs are used to process spatio-temporal remote
sensing images in order to minimize computation cost differences and improve
texture information. The findings are compared qualitatively and quantitatively
with current state-of-the-art approaches. In addition, we saved about 43% of
the GPU memory during training and accelerated the execution of our simplified
version by eliminating batch normalization layers.
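The pipeline the abstract describes (wavelet band split, GAN prediction of high-frequency bands, inverse transform) can be sketched roughly as follows. This is a minimal illustration, not the paper's architecture: the single-level Haar transform is an assumed choice of wavelet, and `predict_high_freq` is an identity placeholder standing in for the transferred GAN generator.

```python
# Rough sketch of the TWIST-GAN frequency pipeline:
# (1) split the LR image into frequency bands with a wavelet transform,
# (2) predict/refine the high-frequency bands (a transferred GAN in the
#     paper; an identity placeholder here), (3) inverse-transform.
import numpy as np

def haar_dwt2(x: np.ndarray):
    """One-level 2-D Haar DWT: returns LL, LH, HL, HH bands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-frequency approximation
    lh = (a + b - c - d) / 2.0   # horizontal detail
    hl = (a - b + c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh) -> np.ndarray:
    """Inverse of haar_dwt2; reconstructs the image exactly."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll + lh - hl - hh) / 2.0
    c = (ll - lh + hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2], out[0::2, 1::2] = a, b
    out[1::2, 0::2], out[1::2, 1::2] = c, d
    return out

def predict_high_freq(band: np.ndarray) -> np.ndarray:
    # Placeholder for the transferred GAN generator.
    return band

def wavelet_sr_sketch(lr_image: np.ndarray) -> np.ndarray:
    ll, lh, hl, hh = haar_dwt2(lr_image)
    lh, hl, hh = (predict_high_freq(s) for s in (lh, hl, hh))
    return haar_idwt2(ll, lh, hl, hh)

img = np.random.rand(256, 256)          # image size used in the paper
sr = wavelet_sr_sketch(img)
assert sr.shape == img.shape and np.allclose(sr, img)
```

With the identity placeholder, the round trip reconstructs the input exactly; the paper's contribution is precisely in replacing that placeholder with a GAN that synthesizes the missing high-frequency content.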
Related papers
- Riesz-Quincunx-UNet Variational Auto-Encoder for Satellite Image
Denoising [0.0]
We introduce a hybrid RQUNet-VAE scheme for image and time series decomposition used to reduce noise in satellite imagery.
We also apply our scheme to several applications for multi-band satellite images, including: image denoising, image and time-series decomposition by diffusion and image segmentation.
arXiv Detail & Related papers (2022-08-25T19:51:07Z) - Transformer-based SAR Image Despeckling [53.99620005035804]
We introduce a transformer-based network for SAR image despeckling.
The proposed despeckling network comprises a transformer-based encoder which allows the network to learn global dependencies between different image regions.
Experiments show that the proposed method achieves significant improvements over traditional and convolutional neural network-based despeckling methods.
arXiv Detail & Related papers (2022-01-23T20:09:01Z) - FreqNet: A Frequency-domain Image Super-Resolution Network with Discrete
Cosine Transform [16.439669339293747]
Single image super-resolution (SISR) is an ill-posed problem that aims to obtain high-resolution (HR) output from low-resolution (LR) input.
Despite the high peak signal-to-noise ratio (PSNR) results, it is difficult to determine whether the model correctly adds desired high-frequency details.
We propose FreqNet, an intuitive pipeline from the frequency domain perspective, to solve this problem.
arXiv Detail & Related papers (2021-11-21T11:49:12Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - Multi-Attention Generative Adversarial Network for Remote Sensing Image
Super-Resolution [17.04588012373861]
Image super-resolution (SR) methods can generate remote sensing images with high spatial resolution without increasing the cost.
We propose a network based on the generative adversarial network (GAN) to generate high resolution remote sensing images.
arXiv Detail & Related papers (2021-07-14T08:06:19Z) - Deep Unfolded Recovery of Sub-Nyquist Sampled Ultrasound Image [94.42139459221784]
We propose a reconstruction method from sub-Nyquist samples in the time and spatial domains, based on unfolding the ISTA algorithm.
Our method allows reducing the number of array elements, sampling rate, and computational time while ensuring high quality imaging performance.
arXiv Detail & Related papers (2021-03-01T19:19:38Z) - Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures the frequency domain consistency when applying Super-Resolution (SR) methods to the real scene.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easy-implemented Convolutional Neural Network (CNN) SR models.
arXiv Detail & Related papers (2020-12-18T08:25:39Z) - Super-Resolution of Real-World Faces [3.4376560669160394]
Real low-resolution (LR) face images contain degradations which are too varied and complex to be captured by known downsampling kernels.
In this paper, we propose a two-module super-resolution network where the feature extractor module extracts robust features from the LR image.
We train a degradation GAN to convert bicubically downsampled clean images to real degraded images, and interpolate between the obtained degraded LR image and its clean LR counterpart.
arXiv Detail & Related papers (2020-11-04T17:25:54Z) - Hyperspectral Image Super-resolution via Deep Progressive Zero-centric
Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z) - Deep Generative Adversarial Residual Convolutional Networks for
Real-World Super-Resolution [31.934084942626257]
We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN)
It follows the real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
The proposed network exploits the residual learning by minimizing the energy-based objective function with powerful image regularization and convex optimization techniques.
arXiv Detail & Related papers (2020-05-03T00:12:38Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.