High Quality Remote Sensing Image Super-Resolution Using Deep Memory
Connected Network
- URL: http://arxiv.org/abs/2010.00472v1
- Date: Thu, 1 Oct 2020 15:06:02 GMT
- Title: High Quality Remote Sensing Image Super-Resolution Using Deep Memory
Connected Network
- Authors: Wenjia Xu, Guangluan Xu, Yang Wang, Xian Sun, Daoyu Lin, Yirong Wu
- Abstract summary: Single image super-resolution is crucial for many applications such as target detection and image classification.
We propose a novel method named deep memory connected network (DMCN) based on a convolutional neural network to reconstruct high-quality super-resolution images.
- Score: 21.977093907114217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single image super-resolution is an effective way to enhance the spatial
resolution of remote sensing images, which is crucial for many applications such
as target detection and image classification. However, existing neural-network-based
methods usually have small receptive fields and ignore image detail. We propose a
novel method named deep memory connected network (DMCN), based on a convolutional
neural network, to reconstruct high-quality super-resolution images. We build local
and global memory connections to combine image detail with environmental information.
To further reduce the number of parameters and the computation time, we propose
downsampling units that shrink the spatial size of the feature maps. We test DMCN
on three remote sensing datasets with different spatial resolutions. Experimental
results indicate that our method yields promising improvements in both accuracy and
visual performance over the current state-of-the-art.
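The abstract names the building blocks (memory connections, downsampling units) without spelling out their form, so below is a minimal PyTorch-style sketch of how local and global memory (skip) connections might be combined with downsampling units that shrink the feature maps. The module names, channel widths, single-channel input, and the transposed-convolution upsampling are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class MemoryBlock(nn.Module):
    """Two conv layers with a local memory (residual) connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # local memory connection


class DMCNSketch(nn.Module):
    """Hypothetical DMCN-style network operating on a bicubic-upsampled input."""

    def __init__(self, channels: int = 64, num_blocks: int = 4):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        # Downsampling unit: a strided conv shrinks the spatial size of feature maps,
        # cutting parameters and computation in the stacked memory blocks.
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.blocks = nn.Sequential(*[MemoryBlock(channels) for _ in range(num_blocks)])
        # Upsampling back to the input resolution (transposed conv assumed here).
        self.up = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        out = self.up(self.blocks(self.down(feat)))
        out = out + feat           # global memory connection over the whole trunk
        return self.tail(out) + x  # residual reconstruction of the HR image


# Usage on a single-channel 64x64 patch (shape assumptions only):
sr = DMCNSketch()(torch.randn(1, 1, 64, 64))
```

In this sketch the local connections preserve detail inside each block, while the global connection and the final input skip carry low-frequency environmental information past the downsampled trunk, which is the stated motivation for combining memory connections with downsampling units.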
Related papers
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents an architecture with the holistic goal of maintaining spatially precise, high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Fusformer: A Transformer-based Fusion Approach for Hyperspectral Image Super-resolution [9.022005574190182]
We design a Transformer-based network for fusing low-resolution hyperspectral images (LR-HSIs) and high-resolution multispectral images (HR-MSIs).
Considering that the LR-HSIs hold the main spectral structure, the network focuses on spatial detail estimation.
Various experiments and quality indices show our approach's superiority over other state-of-the-art methods.
arXiv Detail & Related papers (2021-09-05T14:00:34Z)
- High-resolution Depth Maps Imaging via Attention-based Hierarchical Multi-modal Fusion [84.24973877109181]
We propose a novel attention-based hierarchical multi-modal fusion network for guided depth super-resolution (DSR).
We show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed and memory efficiency.
arXiv Detail & Related papers (2021-04-04T03:28:33Z)
- A Parallel Down-Up Fusion Network for Salient Object Detection in Optical Remote Sensing Images [82.87122287748791]
We propose a novel Parallel Down-up Fusion network (PDF-Net) for salient object detection in optical remote sensing images (RSIs).
It takes full advantage of in-path low- and high-level features and cross-path multi-resolution features to distinguish salient objects of diverse scales and suppress cluttered backgrounds.
Experiments on the ORSSD dataset demonstrate that the proposed network is superior to the state-of-the-art approaches both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-10-02T05:27:57Z)
- Real Image Super Resolution Via Heterogeneous Model Ensemble using GP-NAS [63.48801313087118]
We propose a new method for image super-resolution using a deep residual network with dense skip connections.
The proposed method won the first place in all three tracks of the AIM 2020 Real Image Super-Resolution Challenge.
arXiv Detail & Related papers (2020-09-02T22:33:23Z)
- Multi-image Super Resolution of Remotely Sensed Images using Residual Feature Attention Deep Neural Networks [1.3764085113103222]
The presented research proposes a novel residual attention model (RAMS) that efficiently tackles the multi-image super-resolution task.
We introduce a mechanism of visual feature attention with 3D convolutions in order to obtain feature-aware data fusion and information extraction.
Our representation learning network makes extensive use of nestled residual connections to let redundant low-frequency signals flow through the network.
arXiv Detail & Related papers (2020-07-06T22:54:02Z)
- Integrating global spatial features in CNN based Hyperspectral/SAR imagery classification [11.399460655843496]
This paper proposes a novel method that takes into account additional information of remote sensing images, i.e., geographic latitude-longitude information.
A dual-branch convolutional neural network (CNN) classification method is designed in combination with the global information to mine the pixel features of the image.
Two remote sensing images are used to verify the effectiveness of our method, including hyperspectral imaging (HSI) and polarimetric synthetic aperture radar (PolSAR) imagery.
arXiv Detail & Related papers (2020-05-30T10:00:10Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
- Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolution to extract spatial and spectral information, which not only reduces memory usage and computational cost, but also makes the network easier to train.
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
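To make the separable 3D convolution mentioned in the SSRNet entry above concrete, here is a minimal sketch assuming a PyTorch implementation that factorizes a full 3x3x3 kernel into a spatial 1x3x3 convolution followed by a spectral 3x1x1 convolution; the module name, channel counts, activation placement, and input shape are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn


class SeparableConv3d(nn.Module):
    """Factorizes a k x k x k 3D convolution into a spatial and a spectral part."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Spatial part: convolve over height and width only (kernel size 1 along bands).
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # Spectral part: convolve along the band axis only.
        self.spectral = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))

    def forward(self, x):  # x: (batch, channels, bands, height, width)
        return self.spectral(torch.relu(self.spatial(x)))


# For 64 -> 64 channels, a full 3x3x3 kernel needs 64*64*27 weights, while the
# separable pair needs 64*64*(9 + 3), roughly 2.2x fewer parameters.
y = SeparableConv3d(1, 64)(torch.randn(1, 1, 31, 32, 32))  # e.g. a 31-band hyperspectral patch
```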
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.