When Image Decomposition Meets Deep Learning: A Novel Infrared and
Visible Image Fusion Method
- URL: http://arxiv.org/abs/2009.01315v2
- Date: Wed, 14 Apr 2021 12:24:44 GMT
- Title: When Image Decomposition Meets Deep Learning: A Novel Infrared and
Visible Image Fusion Method
- Authors: Zixiang Zhao, Jiangshe Zhang, Shuang Xu, Kai Sun, Chunxia Zhang,
Junmin Liu
- Abstract summary: Infrared and visible image fusion is a hot topic in image processing and image enhancement.
We propose a novel dual-stream auto-encoder based fusion network.
- Score: 27.507158159317417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Infrared and visible image fusion, a hot topic in image processing
and image enhancement, aims to produce fused images that retain the detail and
texture information of visible images and the thermal radiation information of
infrared images. A critical step is to decompose features at different scales
and to merge them separately. In this paper, we propose a novel dual-stream
auto-encoder (AE) based fusion network. The core idea is that the encoder
decomposes an image into base and detail feature maps carrying low- and
high-frequency information, respectively, while the decoder reconstructs the
original image. To this end, a well-designed loss function makes the base
feature maps of the two modalities similar and their detail feature maps
dissimilar. In the test phase, the base and detail feature maps are merged via
an additional fusion layer, which contains a saliency-weighted spatial
attention module and a channel attention module, to adaptively preserve more
information from the source images and to highlight salient objects. The fused
image is then recovered by the decoder. Qualitative and quantitative results
demonstrate that our method generates fusion images with highlighted targets
and abundant detail texture, is strongly reproducible, and outperforms
state-of-the-art (SOTA) approaches.
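To make the pipeline concrete, here is a minimal PyTorch sketch of the idea described in the abstract, not the authors' released implementation: a shared encoder stem splits each image into a base stream and a detail stream, the decoder reconstructs from their concatenation, and a margin loss pulls the base maps of the two modalities together while pushing the detail maps apart. All module names, channel widths, loss weights, the margin value, and the plain averaging fusion (standing in for the paper's saliency-weighted spatial attention and channel attention modules) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamAE(nn.Module):
    """Toy dual-stream auto-encoder: one shared stem, then separate
    base (low-frequency) and detail (high-frequency) branches."""
    def __init__(self, channels=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.base_enc = nn.Conv2d(channels, channels, 3, padding=1)
        self.detail_enc = nn.Conv2d(channels, channels, 3, padding=1)
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1))

    def encode(self, x):
        h = self.stem(x)
        return self.base_enc(h), self.detail_enc(h)

    def decode(self, base, detail):
        return self.decoder(torch.cat([base, detail], dim=1))


def decomposition_loss(ir, vis, model, margin=1.0, w_recon=1.0, w_feat=0.1):
    """Reconstruction loss plus the similar/dissimilar constraint: base maps
    of the two modalities are pulled together, detail maps are pushed apart
    up to a margin. This is one plausible reading of the paper's loss; the
    exact published form may differ."""
    b_ir, d_ir = model.encode(ir)
    b_vis, d_vis = model.encode(vis)
    recon = (F.mse_loss(model.decode(b_ir, d_ir), ir)
             + F.mse_loss(model.decode(b_vis, d_vis), vis))
    sim = F.mse_loss(b_ir, b_vis)                   # bases: similar
    dis = F.relu(margin - F.mse_loss(d_ir, d_vis))  # details: dissimilar
    return w_recon * recon + w_feat * (sim + dis)


@torch.no_grad()
def fuse(model, ir, vis):
    # Test phase: merge the two base maps and the two detail maps, then
    # decode. Plain averaging here stands in for the paper's attention-based
    # fusion layer.
    b_ir, d_ir = model.encode(ir)
    b_vis, d_vis = model.encode(vis)
    return model.decode(0.5 * (b_ir + b_vis), 0.5 * (d_ir + d_vis))
```

Training would minimize decomposition_loss over registered infrared/visible pairs; at test time, fuse produces the fused image in a single forward pass through the frozen encoder and decoder.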
Related papers
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992] (2024-10-16)
  The essence of image fusion is to integrate complementary information from source images. DeFusion++ produces versatile fused representations that enhance both the quality of image fusion and the effectiveness of downstream high-level vision tasks.
- A Multi-scale Information Integration Framework for Infrared and Visible Image Fusion [46.545365049713105] (2023-12-07)
  Infrared and visible image fusion aims to generate a fused image containing the intensity and detail information of the source images. Existing methods mostly adopt a simple weight in the loss function to decide how much information from each modality is retained. We propose a multi-scale dual attention (MDA) framework for infrared and visible image fusion.
- DePF: A Novel Fusion Approach based on Decomposition Pooling for Infrared and Visible Images [7.11574718614606] (2023-05-27)
  A novel fusion network based on decomposition pooling (de-pooling) is proposed, termed DePF. A de-pooling-based encoder extracts multi-scale image and detail features of the source images simultaneously. Experimental results demonstrate that the proposed method delivers fusion performance superior to the state of the art.
- CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion [138.40422469153145] (2022-11-26)
  We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network. CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795] (2022-11-20)
  We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion. Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
- Interactive Feature Embedding for Infrared and Visible Image Fusion [94.77188069479155] (2022-11-09)
  General deep-learning-based methods for infrared and visible image fusion rely on an unsupervised mechanism for vital information retention. We propose a novel interactive feature embedding in a self-supervised learning framework for infrared and visible image fusion.
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774] (2022-04-19)
  This paper pursues the holistic goal of maintaining spatially precise, high-resolution representations throughout the entire network. We learn an enriched set of features that combines contextual information from multiple scales while simultaneously preserving high-resolution spatial details. Our approach achieves state-of-the-art results on a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
- A Deep Decomposition Network for Image Processing: A Case Study for Visible and Infrared Image Fusion [38.17268441062239] (2021-02-21)
  We propose a new image decomposition method based on a convolutional neural network. The infrared and visible-light images are each decomposed into three high-frequency feature images and one low-frequency feature image; the two sets of feature images are then fused with a specific fusion strategy to obtain fused feature images (a classical stand-in for this split is sketched after this list).
- DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion [28.7553352357059] (2020-03-20)
  This paper proposes a novel auto-encoder based fusion network. The encoder decomposes an image into background and detail feature maps with low- and high-frequency information, respectively. In the test phase, the background and detail feature maps are merged via a fusion module, and the fused image is recovered by the decoder.
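As a concrete reference point for the low/high-frequency splits recurring in the list above (e.g., the deep decomposition network's three high-frequency images plus one low-frequency image), here is a classical, non-learned stand-in built from repeated Gaussian smoothing. The kernel construction, the sigma schedule, and the single-channel (B, 1, H, W) input shape are assumptions; the papers themselves learn such splits with CNNs.

```python
import torch
import torch.nn.functional as F

def gaussian_blur(x, sigma):
    # x: (B, 1, H, W) grayscale batch. Separable Gaussian filter with
    # reflect padding so the output keeps the input's spatial size.
    radius = int(3 * sigma)
    t = torch.arange(-radius, radius + 1, dtype=x.dtype, device=x.device)
    k = torch.exp(-0.5 * (t / sigma) ** 2)
    k = k / k.sum()
    x = F.conv2d(F.pad(x, (radius, radius, 0, 0), mode="reflect"),
                 k.view(1, 1, 1, -1))  # horizontal pass
    x = F.conv2d(F.pad(x, (0, 0, radius, radius), mode="reflect"),
                 k.view(1, 1, -1, 1))  # vertical pass
    return x

def decompose(x, sigmas=(1.0, 2.0, 4.0)):
    """Return three band-pass 'high-frequency' images and one
    'low-frequency' residual, in the spirit of (but not identical to)
    the learned decompositions discussed above."""
    highs, prev = [], x
    for s in sigmas:
        low = gaussian_blur(x, s)
        highs.append(prev - low)  # detail lost between successive scales
        prev = low
    return highs, prev            # prev is the final low-frequency image
```

Note that the three high-frequency images and the low-frequency residual sum back exactly to the input, mirroring the reconstruction constraint these decomposition-based fusion methods typically impose.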