Image Fusion in Remote Sensing: An Overview and Meta Analysis
- URL: http://arxiv.org/abs/2401.08837v1
- Date: Tue, 16 Jan 2024 21:21:17 GMT
- Title: Image Fusion in Remote Sensing: An Overview and Meta Analysis
- Authors: Hessah Albanwan, Rongjun Qin, Yang Tang
- Abstract summary: Image fusion in Remote Sensing (RS) has been a consistent demand due to its ability to turn raw images of different resolutions, sources, and modalities into accurate, complete, and coherent images.
Yet, image fusion solutions are highly disparate across remote sensing problems and are thus often narrowly defined in existing reviews as topical applications.
This paper comprehensively surveys relevant works with a simple taxonomy: 1) many-to-one image fusion; 2) many-to-many image fusion.
- Score: 12.500746892824338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image fusion in Remote Sensing (RS) has been a consistent demand due to its
ability to turn raw images of different resolutions, sources, and modalities
into accurate, complete, and spatio-temporally coherent images. It greatly
facilitates downstream applications such as pan-sharpening, change detection,
land-cover classification, etc. Yet, image fusion solutions are highly
disparate across various remote sensing problems and thus are often narrowly
defined in existing reviews as topical applications, such as pan-sharpening,
and spatial-temporal image fusion. Considering that image fusion can be
theoretically applied to any gridded data through pixel-level operations, in
this paper, we expanded its scope by comprehensively surveying relevant works
with a simple taxonomy: 1) many-to-one image fusion; 2) many-to-many image
fusion. This simple taxonomy defines image fusion as a mapping problem that
turns either a single or a set of images into another single or set of images,
depending on the desired coherence, e.g., spectral, spatial/resolution
coherence, etc. We show that this simple taxonomy, despite the significant
modality difference it covers, can be presented by a conceptually easy
framework. In addition, we provide a meta-analysis to review the major papers
studying the various types of image fusion and their applications over the
years (from the 1980s to date), covering 5,926 peer-reviewed papers. Finally,
we discuss the main benefits and emerging challenges to provide open research
directions and potential future works.
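The abstract frames many-to-one fusion as a pixel-level mapping from a set of co-registered gridded images to a single coherent output. A minimal illustrative sketch of that idea is weighted per-pixel averaging; the function name and the averaging rule below are assumptions for illustration, not the paper's method:

```python
import numpy as np

def fuse_many_to_one(images, weights=None):
    """Fuse co-registered 2-D images into one via weighted per-pixel averaging.

    images: list of equally shaped 2-D arrays (hypothetical inputs).
    weights: optional per-image weights; defaults to uniform.
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    if weights is None:
        weights = np.ones(len(images))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the output stays in the input range
    # Weighted sum along the image axis: one output pixel per grid location.
    return np.tensordot(w, stack, axes=1)

# Two toy 2x2 "images": uniform weights give the per-pixel mean.
a = np.array([[0.0, 2.0], [4.0, 6.0]])
b = np.array([[2.0, 4.0], [6.0, 8.0]])
fused = fuse_many_to_one([a, b])
# fused == [[1., 3.], [5., 7.]]
```

Real fusion pipelines (pan-sharpening, spatial-temporal fusion) replace the fixed weights with learned or data-adaptive ones, but the mapping signature (a set of grids in, one grid out) is the same.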
Related papers
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z) - FusionMamba: Efficient Image Fusion with State Space Model [35.57157248152558]
Image fusion aims to generate a high-resolution multi/hyper-spectral image by merging a high-resolution image with limited spectral information and a low-resolution image with abundant spectral data.
Current deep learning (DL)-based methods for image fusion rely on CNNs or Transformers to extract features and merge different types of data.
We propose FusionMamba, an innovative method for efficient image fusion.
arXiv Detail & Related papers (2024-04-11T17:29:56Z) - From Text to Pixels: A Context-Aware Semantic Synergy Solution for
Infrared and Visible Image Fusion [66.33467192279514]
We introduce a text-guided multi-modality image fusion method that leverages the high-level semantics from textual descriptions to integrate semantics from infrared and visible images.
Our method not only produces visually superior fusion results but also achieves a higher detection mAP over existing methods, achieving state-of-the-art results.
arXiv Detail & Related papers (2023-12-31T08:13:47Z) - A Task-guided, Implicitly-searched and Meta-initialized Deep Model for
Image Fusion [69.10255211811007]
We present a Task-guided, Implicitly-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario.
Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
arXiv Detail & Related papers (2023-05-25T08:54:08Z) - Equivariant Multi-Modality Image Fusion [124.11300001864579]
We propose the Equivariant Multi-Modality imAge fusion paradigm for end-to-end self-supervised learning.
Our approach is rooted in the prior knowledge that natural imaging responses are equivariant to certain transformations.
Experiments confirm that EMMA yields high-quality fusion results for infrared-visible and medical images.
arXiv Detail & Related papers (2023-05-19T05:50:24Z) - CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature
Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z) - Unsupervised Image Fusion Method based on Feature Mutual Mapping [16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections of pixels between the input source images.
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z) - Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders.
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.