Learning deep multiresolution representations for pansharpening
- URL: http://arxiv.org/abs/2102.08423v1
- Date: Tue, 16 Feb 2021 19:41:57 GMT
- Title: Learning deep multiresolution representations for pansharpening
- Authors: Hannan Adeel and Syed Sohaib Ali and Muhammad Mohsin Riaz and Syed
Abdul Mannan Kirmani and Muhammad Imran Qureshi and Junaid Imtiaz
- Abstract summary: This paper proposes a pyramid-based deep fusion framework that preserves spectral and spatial characteristics at different scales.
Experiments suggest that the proposed architecture outperforms state-of-the-art pansharpening models.
- Score: 4.469255274378329
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Retaining the spatial characteristics of the panchromatic image and the
spectral information of the multispectral bands is a critical issue in pansharpening. This
paper proposes a pyramid-based deep fusion framework that preserves spectral
and spatial characteristics at different scales. The spectral information is
preserved by passing the corresponding low-resolution multispectral image as the
residual component of the network at each scale. The spatial information is
preserved by training the network at each scale with the high frequencies of the
panchromatic image alongside the corresponding low-resolution multispectral
image. The parameters of different networks are shared across the pyramid in
order to add spatial details consistently across scales. The parameters are
also shared across fusion layers within a network at a specific scale.
Experiments suggest that the proposed architecture outperforms state-of-the-art
pansharpening models. The proposed model, code, and dataset are publicly
available at https://github.com/sohaibali01/deep_pyramid_fusion.
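The abstract's data flow (coarse-to-fine pyramid, low-resolution MS as the residual, PAN high frequencies injected at each scale) can be sketched without the learned components. The following is a minimal numpy illustration, not the authors' trained network: the box filter stands in for Gaussian smoothing, and a fixed scalar `alpha` stands in for the learned, shared fusion weights.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable-style box filter as a stand-in for Gaussian smoothing."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def upsample(img, shape):
    """Nearest-neighbor 2x upsampling, cropped to the target spatial shape."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def pyramid_fuse(pan, ms_lr, levels=3, alpha=1.0):
    """Coarse-to-fine fusion: at each scale, inject the PAN high
    frequencies into the upsampled multispectral estimate, with the
    low-resolution MS image acting as the residual (spectral) component."""
    # Gaussian-like pyramid of the panchromatic image.
    pans = [pan.astype(float)]
    for _ in range(levels):
        pans.append(box_blur(pans[-1])[::2, ::2])
    # Start from the low-resolution MS image at the coarsest scale.
    est = ms_lr.astype(float)
    for lvl in range(levels, 0, -1):
        est = upsample(est, pans[lvl - 1].shape[:2])
        # High frequencies of PAN at this scale (detail layer).
        detail = pans[lvl - 1] - box_blur(pans[lvl - 1])
        # In the paper, shared networks decide how details are added;
        # here a constant alpha stands in for that learned mapping.
        est = est + alpha * detail[..., None]
    return est
```

For a 64x64 PAN image and an 8x8x4 MS image with `levels=3`, the result is a 64x64x4 pansharpened estimate; replacing `alpha * detail` with a network shared across scales recovers the structure the abstract describes.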
Related papers
- Multi-Head Attention Residual Unfolded Network for Model-Based Pansharpening [2.874893537471256]
Unfolding fusion methods integrate the powerful representation capabilities of deep learning with the robustness of model-based approaches.
In this paper, we propose a model-based deep unfolded method for satellite image fusion.
Experimental results on PRISMA, Quickbird, and WorldView2 datasets demonstrate the superior performance of our method.
arXiv Detail & Related papers (2024-09-04T13:05:00Z)
- PanBench: Towards High-Resolution and High-Performance Pansharpening [16.16122045172545]
Pansharpening involves integrating low-resolution multispectral images with high-resolution panchromatic images to synthesize an image that is both high-resolution and retains multispectral information.
This paper introduces PanBench, a high-resolution multi-scene dataset containing all mainstream satellites.
To achieve high-fidelity synthesis, we propose a Cascaded Multiscale Fusion Network (CMFNet) for Pansharpening.
arXiv Detail & Related papers (2023-11-20T10:57:23Z)
- Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning [55.762840052788945]
We present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales.
We find that tasking the network with reconstructing both low/high frequency images leads to robust multiscale representations for remote sensing imagery.
arXiv Detail & Related papers (2022-12-30T03:15:34Z)
- PC-GANs: Progressive Compensation Generative Adversarial Networks for Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and based on the specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z)
- Deep dual stream residual network with contextual attention for pansharpening of remote sensing images [2.210012031884757]
We present a novel dual attention-based two-stream network.
It starts with feature extraction using two separate networks for the two images, followed by an encoder with an attention mechanism that recalibrates the extracted features.
The features are then fused into a compact representation that is fed into an image reconstruction network to produce a pansharpened image.
arXiv Detail & Related papers (2022-07-25T09:28:11Z)
- Multiscale Analysis for Improving Texture Classification [62.226224120400026]
This paper employs the Gaussian-Laplacian pyramid to treat different spatial frequency bands of a texture separately.
We aggregate features extracted from gray and color texture images using bio-inspired texture descriptors, information-theoretic measures, gray-level co-occurrence matrix features, and Haralick statistical features into a single feature vector.
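The Gaussian-Laplacian pyramid that this entry relies on splits an image into spatial frequency bands: each Laplacian level is a band-pass layer, and the residual holds the coarsest frequencies. A minimal numpy sketch of the decomposition (a box filter standing in for Gaussian smoothing, not the paper's exact pipeline):

```python
import numpy as np

def blur(img, k=3):
    """Box filter as a stand-in for Gaussian smoothing."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def gauss_laplace_pyramid(img, levels=3):
    """Decompose an image into band-pass (Laplacian) levels plus a
    low-pass residual, so each frequency band can be treated separately."""
    gaussians = [img.astype(float)]
    for _ in range(levels):
        gaussians.append(blur(gaussians[-1])[::2, ::2])
    laplacians = []
    for lvl in range(levels):
        # Upsample the next coarser level and subtract: the difference
        # keeps only the frequencies lost between the two scales.
        up = np.repeat(np.repeat(gaussians[lvl + 1], 2, 0), 2, 1)
        up = up[:gaussians[lvl].shape[0], :gaussians[lvl].shape[1]]
        laplacians.append(gaussians[lvl] - up)
    return laplacians, gaussians[-1]  # band-pass levels + low-pass residual
```

Because each Laplacian level stores exactly what the upsampled coarser level discards, summing the residual back up through the levels reconstructs the original image, which is what makes per-band feature extraction lossless in principle.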
arXiv Detail & Related papers (2022-04-21T01:32:22Z)
- GraphFPN: Graph Feature Pyramid Network for Object Detection [44.481481251032264]
We propose graph feature pyramid networks that are capable of adapting their topological structures to varying intrinsic image structures.
The proposed graph feature pyramid network can enhance the multiscale features from a convolutional feature pyramid network.
We evaluate our graph feature pyramid network in the object detection task by integrating it into the Faster R-CNN algorithm.
arXiv Detail & Related papers (2021-08-02T01:19:38Z)
- SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers [91.09957836250209]
Hyperspectral (HS) images are characterized by approximately contiguous spectral information.
CNNs have been proven to be a powerful feature extractor in HS image classification.
We propose a novel backbone network called SpectralFormer for HS image classification.
arXiv Detail & Related papers (2021-07-07T02:59:21Z)
- NeuralFusion: Online Depth Fusion in Latent Space [77.59420353185355]
We present a novel online depth map fusion approach that learns depth map aggregation in a latent feature space.
Our approach is real-time capable, handles high noise levels, and is particularly able to deal with gross outliers common for photometric stereo-based depth maps.
arXiv Detail & Related papers (2020-11-30T13:50:59Z)
- Adaptive Context-Aware Multi-Modal Network for Depth Completion [107.15344488719322]
We propose to adopt the graph propagation to capture the observed spatial contexts.
We then apply the attention mechanism on the propagation, which encourages the network to model the contextual information adaptively.
Finally, we introduce the symmetric gated fusion strategy to exploit the extracted multi-modal features effectively.
Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves the state-of-the-art performance on two benchmarks.
arXiv Detail & Related papers (2020-08-25T06:00:06Z)
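The "symmetric gated fusion" in the ACMNet entry is a recurring pattern in multi-modal fusion. The sketch below is a generic version, not ACMNet's exact formulation: each stream produces a per-channel gate from the concatenated features, and the fused output is the gate-weighted convex blend of the two streams (the weight matrices `w_a`, `w_b` are hypothetical, standing in for learned parameters).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(feat_a, feat_b, w_a, w_b):
    """Symmetric gating: both modalities see the joint features, each
    produces a gate, and the fused feature is their convex combination."""
    joint = np.concatenate([feat_a, feat_b], axis=-1)
    gate_a = sigmoid(joint @ w_a)  # per-channel trust in stream A
    gate_b = sigmoid(joint @ w_b)  # per-channel trust in stream B
    # Normalize so the gates form a convex combination per channel.
    total = gate_a + gate_b + 1e-8
    return (gate_a * feat_a + gate_b * feat_b) / total
```

Because the output is a convex combination, every fused value lies between the two streams' values, which keeps the fusion stable regardless of the learned gate weights.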
This list is automatically generated from the titles and abstracts of the papers in this site.