An Integrated Framework for the Heterogeneous Spatio-Spectral-Temporal
Fusion of Remote Sensing Images
- URL: http://arxiv.org/abs/2109.00400v1
- Date: Wed, 1 Sep 2021 14:29:23 GMT
- Title: An Integrated Framework for the Heterogeneous Spatio-Spectral-Temporal
Fusion of Remote Sensing Images
- Authors: Menghui Jiang, Huanfeng Shen, Jie Li, Liangpei Zhang
- Abstract summary: This paper first proposes a heterogeneous-integrated framework based on a novel deep residual cycle GAN.
The proposed network can effectively fuse not only homogeneous but also heterogeneous information.
For the first time, a heterogeneous-integrated fusion framework is proposed to simultaneously merge the complementary heterogeneous spatial, spectral and temporal information.
- Score: 22.72006711045537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image fusion technology is widely used to fuse the complementary information
between multi-source remote sensing images. Inspired by recent advances in deep
learning, this paper proposes a heterogeneous-integrated framework based
on a novel deep residual cycle GAN. The proposed network consists of a forward
fusion part and a backward degeneration feedback part. The forward part
generates the desired fusion result from the various observations; the backward
degeneration feedback part considers the imaging degradation process and
regenerates the observations inversely from the fusion result. The proposed
network can effectively fuse not only the homogeneous but also the
heterogeneous information. In addition, for the first time, a
heterogeneous-integrated fusion framework is proposed to simultaneously merge
the complementary heterogeneous spatial, spectral and temporal information of
multi-source heterogeneous observations. The proposed heterogeneous-integrated
framework also provides a uniform mode that can complete various fusion tasks,
including heterogeneous spatio-spectral fusion, spatio-temporal fusion, and
heterogeneous spatio-spectral-temporal fusion. Experiments are conducted for
two challenging scenarios of land cover changes and thick cloud coverage.
Images from many remote sensing satellites, including MODIS, Landsat-8,
Sentinel-1, and Sentinel-2, are utilized in the experiments. Both qualitative
and quantitative evaluations confirm the effectiveness of the proposed method.
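To make the forward-fusion / backward-degeneration cycle concrete, the sketch below outlines its structure in PyTorch. It is a minimal illustration under assumed layer sizes and band counts; the discriminators, losses, and the authors' actual architecture are not reproduced here.

```python
# Minimal sketch of a residual cycle structure: a forward fusion generator and a
# backward degradation generator tied by a cycle-consistency loss. All module
# names, channel counts, and band counts are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return x + self.body(x)  # residual connection

class FusionGenerator(nn.Module):
    """Forward part: maps stacked multi-source observations to the fused image."""
    def __init__(self, in_bands, out_bands, ch=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(in_bands, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, out_bands, 3, padding=1)
    def forward(self, obs):
        return self.tail(self.blocks(self.head(obs)))

class DegradationGenerator(nn.Module):
    """Backward part: regenerates an observation from the fused image,
    mimicking the imaging degradation process."""
    def __init__(self, in_bands, out_bands, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, out_bands, 3, padding=1),
        )
    def forward(self, fused):
        return self.net(fused)

# One cycle-consistency style step (adversarial terms omitted for brevity):
fuse = FusionGenerator(in_bands=10, out_bands=4)
degrade = DegradationGenerator(in_bands=4, out_bands=10)
obs = torch.randn(1, 10, 64, 64)              # stacked heterogeneous observations
fused = fuse(obs)                             # forward fusion
rec = degrade(fused)                          # backward degeneration feedback
cycle_loss = nn.functional.l1_loss(rec, obs)  # penalize cycle inconsistency
```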
Related papers
- MMA-UNet: A Multi-Modal Asymmetric UNet Architecture for Infrared and Visible Image Fusion [4.788349093716269]
Multi-modal image fusion (MMIF) maps useful information from various modalities into the same representation space.
The existing fusion algorithms tend to symmetrically fuse the multi-modal images, causing the loss of shallow information or bias towards a single modality.
In this study, we analyze the differences in the spatial distribution of information across modalities and show that encoding features of both modalities within the same network is not conducive to achieving simultaneous deep feature-space alignment.
arXiv Detail & Related papers (2024-04-27T01:35:21Z)
- Physics-Inspired Degradation Models for Hyperspectral Image Fusion [61.743696362028246]
Most fusion methods solely focus on the fusion algorithm itself and overlook the degradation models.
We propose physics-inspired degradation models (PIDM) to model the degradation of LR-HSI and HR-MSI.
Our proposed PIDM can boost the fusion performance of existing fusion methods in practical scenarios.
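As a concrete illustration, the sketch below implements the two standard degradation operators that physics-inspired approaches typically learn or refine: spatial blurring plus downsampling (HR-HSI to LR-HSI) and spectral-response mixing (HR-HSI to HR-MSI). The kernel, scale factor, and spectral response matrix are placeholder assumptions, not PIDM's learned components.

```python
# Toy versions of the two classic hyperspectral degradation operators.
import torch
import torch.nn.functional as F

def spatial_degrade(hr_hsi, kernel, scale=4):
    """LR-HSI ~ blur(HR-HSI) followed by downsampling."""
    b, c, h, w = hr_hsi.shape
    k = kernel.expand(c, 1, *kernel.shape[-2:])           # one PSF per band
    blurred = F.conv2d(hr_hsi, k, padding=kernel.shape[-1] // 2, groups=c)
    return blurred[..., ::scale, ::scale]

def spectral_degrade(hr_hsi, srf):
    """HR-MSI ~ spectral response function mixing the HSI bands."""
    # srf: (msi_bands, hsi_bands); mixes along the channel dimension
    return torch.einsum('mc,bchw->bmhw', srf, hr_hsi)

hr_hsi = torch.randn(1, 31, 64, 64)                 # 31-band toy HSI
kernel = torch.ones(1, 1, 5, 5) / 25.0              # box blur as placeholder PSF
srf = torch.rand(3, 31)
srf = srf / srf.sum(1, keepdim=True)                # rows sum to one
lr_hsi = spatial_degrade(hr_hsi, kernel)            # (1, 31, 16, 16)
hr_msi = spectral_degrade(hr_hsi, srf)              # (1, 3, 64, 64)
```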
arXiv Detail & Related papers (2024-02-04T09:07:28Z)
- Decomposition-based and Interference Perception for Infrared and Visible Image Fusion in Complex Scenes [4.919706769234434]
We propose a decomposition-based and interference perception image fusion method.
We classify the pixels of the visible image according to the degree of scattering of light transmission, and on this basis separate the detail and energy information of the image.
This refined decomposition helps the proposed model identify more of the interfering pixels present in complex scenes.
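The generic detail/energy split that such decomposition methods build on can be sketched as below; the paper's scattering-based pixel classification is not reproduced, and the box filter is an illustrative choice of low-pass operator.

```python
# Low-frequency "energy" via box filtering; high-frequency "detail" as the residual.
import torch
import torch.nn.functional as F

def detail_energy_split(img, k=15):
    kernel = torch.ones(img.shape[1], 1, k, k) / (k * k)   # box filter per channel
    energy = F.conv2d(img, kernel, padding=k // 2, groups=img.shape[1])
    detail = img - energy                                   # residual high frequencies
    return detail, energy

visible = torch.rand(1, 1, 128, 128)        # toy grayscale visible image
detail, energy = detail_energy_split(visible)
```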
arXiv Detail & Related papers (2024-02-03T09:27:33Z)
- A Dual Domain Multi-exposure Image Fusion Network based on the Spatial-Frequency Integration [57.14745782076976]
Multi-exposure image fusion aims to generate a single high-dynamic image by integrating images with different exposures.
We propose a novel perspective on multi-exposure image fusion via a Spatial-Frequency Integration Framework, named MEF-SFI.
Our method achieves visually appealing fusion results compared with state-of-the-art multi-exposure image fusion approaches.
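A toy illustration of dual-domain (spatial plus frequency) fusion is sketched below; the fusion rules are deliberately simple placeholders and do not correspond to the MEF-SFI network.

```python
# Blend a naive spatial average with a frequency-domain fusion that keeps the
# strongest amplitude per frequency; all rules here are illustrative.
import torch

def spatial_frequency_fuse(exposures):
    """exposures: (N, C, H, W) stack of differently exposed images."""
    spatial = exposures.mean(dim=0)                      # naive spatial average
    spectra = torch.fft.fft2(exposures)                  # per-image frequency content
    amp = spectra.abs().max(dim=0).values                # strongest response per frequency
    phase = torch.angle(torch.fft.fft2(spatial))         # phase from the spatial average
    freq = torch.fft.ifft2(amp * torch.exp(1j * phase)).real
    return 0.5 * spatial + 0.5 * freq                    # simple dual-domain blend

stack = torch.rand(3, 3, 64, 64)          # three exposures of a toy RGB scene
fused = spatial_frequency_fuse(stack)     # (3, 64, 64)
```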
arXiv Detail & Related papers (2023-12-17T04:45:15Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to the Stable Diffusion denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- UniFusion: Unified Multi-view Fusion Transformer for Spatial-Temporal Representation in Bird's-Eye-View [20.169308746548587]
We propose a new method that unifies spatial and temporal fusion, merging them into a single mathematical formulation.
With the proposed unified spatial-temporal fusion, our method can support long-range fusion.
Our method achieves state-of-the-art performance on the map segmentation task.
arXiv Detail & Related papers (2022-07-18T11:59:10Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space via either iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls it into a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
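In generic form, such a bilevel formulation couples a detection objective at the upper level with a fusion objective at the lower level; the symbols and losses below are illustrative, not TarDAL's exact formulation:

```latex
\min_{\theta_d} \; \mathcal{L}_{\mathrm{det}}\!\left( D_{\theta_d}\!\big( F_{\theta_f^{*}}(x_{\mathrm{ir}}, x_{\mathrm{vis}}) \big) \right)
\quad \text{s.t.} \quad
\theta_f^{*} = \arg\min_{\theta_f} \; \mathcal{L}_{\mathrm{fuse}}\!\left( F_{\theta_f}(x_{\mathrm{ir}}, x_{\mathrm{vis}}) \right)
```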
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- Exploring Separable Attention for Multi-Contrast MR Image Super-Resolution [88.16655157395785]
We propose a separable attention network (comprising a priority attention and background separation attention) named SANet.
It can explore the foreground and background areas in the forward and reverse directions with the help of the auxiliary contrast.
It is the first model to explore a separable attention mechanism that uses the auxiliary contrast to predict the foreground and background regions.
arXiv Detail & Related papers (2021-09-03T05:53:07Z)
- Subspace-Based Feature Fusion From Hyperspectral And Multispectral Image For Land Cover Classification [17.705966155216945]
A feature fusion method from hyperspectral (HS) and multispectral (MS) images for pixel-based classification is proposed.
The proposed method first extracts spatial features from the MS image using morphological profiles.
An algorithm based on combining alternating optimization (AO) and the alternating direction method of multipliers (ADMM) is developed to efficiently solve the feature fusion problem.
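The flavor of the ADMM inner solver can be sketched on a LASSO-type subproblem; the operator A, data b, and all step sizes below are illustrative and do not correspond to the paper's actual feature fusion objective.

```python
# Generic ADMM loop for min_x 0.5*||Ax - b||^2 + lam*||x||_1 via the split x = z.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=100):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u: scaled dual variable
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))   # quadratic x-update
        z = soft_threshold(x + u, lam / rho)            # proximal z-update
        u = u + x - z                                   # dual ascent
    return z

A = np.random.randn(50, 20)
b = A @ np.random.randn(20) * 0.5
x_hat = admm_lasso(A, b)
```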
arXiv Detail & Related papers (2021-02-22T17:59:18Z)
- Hyperspectral-Multispectral Image Fusion with Weighted LASSO [68.04032419397677]
We propose an approach for fusing hyperspectral and multispectral images to provide high-quality hyperspectral output.
We demonstrate that the proposed sparse fusion and reconstruction provides quantitatively superior results when compared to existing methods on publicly available images.
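A weighted-LASSO objective of the kind underlying such sparse fusion typically takes the following form, with a generic dictionary \(\Phi\) and per-coefficient weights \(w_i\) (the paper's specific construction is not reproduced):

```latex
\hat{\alpha} = \arg\min_{\alpha} \; \tfrac{1}{2} \left\| y - \Phi \alpha \right\|_2^2 + \lambda \sum_{i} w_i \left| \alpha_i \right|
```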
arXiv Detail & Related papers (2020-03-15T23:07:56Z)