DePF: A Novel Fusion Approach based on Decomposition Pooling for
Infrared and Visible Images
- URL: http://arxiv.org/abs/2305.17376v2
- Date: Tue, 4 Jul 2023 15:23:24 GMT
- Authors: Hui Li, Yongbiao Xiao, Chunyang Cheng, Zhongwei Shen, Xiaoning Song
- Abstract summary: A novel fusion network based on decomposition pooling (de-pooling) is proposed, termed DePF.
A de-pooling based encoder is designed to extract multi-scale image features and detail features from the source images simultaneously.
The experimental results demonstrate that the proposed method exhibits superior fusion performance over state-of-the-art methods.
- Score: 7.11574718614606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Infrared and visible image fusion aims to generate synthetic images
that simultaneously contain salient features and rich texture details, which
can be used to boost downstream tasks. However, existing fusion methods suffer
from texture loss and edge information deficiency, which result in suboptimal
fusion results. Meanwhile, the straightforward up-sampling operator cannot
adequately preserve the source information contained in multi-scale features.
To address these issues, a novel fusion network based on decomposition pooling
(de-pooling) is proposed, termed DePF. Specifically, a de-pooling based
encoder is designed to extract multi-scale image features and detail features
from the source images simultaneously. In addition, a spatial attention model
is used to aggregate these salient features. After that, the fused features
are reconstructed by the decoder, in which the up-sampling operator is
replaced by the reversed de-pooling operation. Unlike the common max-pooling
technique, image features after the de-pooling layer retain abundant detail
information, which benefits the fusion process. In this way, rich texture
information and multi-scale information are maintained during the
reconstruction phase. The experimental results demonstrate that the proposed
method exhibits superior fusion performance over state-of-the-art methods on
multiple image fusion benchmarks.
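The abstract describes de-pooling only at a high level. As a rough intuition for why an index-preserving pooling pair beats plain up-sampling at retaining detail, consider max-pooling that records argmax positions and is reversed by scattering values back to those positions. The sketch below illustrates this general idea on a 1D signal in pure Python; the function names and the 1D setting are illustrative stand-ins, not the paper's actual de-pooling operator.

```python
def max_pool_with_indices(x, window=2):
    """Max-pool a 1D list; also record the index of each maximum."""
    pooled, indices = [], []
    for start in range(0, len(x) - window + 1, window):
        chunk = x[start:start + window]
        local = max(range(window), key=lambda i: chunk[i])
        pooled.append(chunk[local])
        indices.append(start + local)
    return pooled, indices

def max_unpool(pooled, indices, length):
    """Reverse operation: scatter pooled values back to their
    recorded positions, preserving their original locations."""
    out = [0.0] * length
    for v, i in zip(pooled, indices):
        out[i] = v
    return out

x = [0.1, 0.9, 0.4, 0.2, 0.7, 0.3]
pooled, idx = max_pool_with_indices(x)      # → [0.9, 0.4, 0.7], [1, 2, 4]
restored = max_unpool(pooled, idx, len(x))  # → [0.0, 0.9, 0.4, 0.0, 0.7, 0.0]
```

Because the maxima return to their exact source positions, edges stay where they were, whereas naive up-sampling (e.g. nearest-neighbor) would smear them to fixed grid locations.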
Related papers
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z)
- Diffusion Hyperfeatures: Searching Through Time and Space for Semantic Correspondence [88.00004819064672]
Diffusion Hyperfeatures is a framework for consolidating multi-scale and multi-timestep feature maps into per-pixel feature descriptors.
Our method achieves superior performance on the SPair-71k real image benchmark.
arXiv Detail & Related papers (2023-05-23T17:58:05Z)
- LRRNet: A Novel Representation Learning Guided Fusion Network for Infrared and Visible Images [98.36300655482196]
We formulate the fusion task mathematically, and establish a connection between its optimal solution and the network architecture that can implement it.
In particular, we adopt a learnable representation approach to the fusion task, in which the construction of the fusion network architecture is guided by the optimisation algorithm that produces the learnable model.
Based on this novel network architecture, an end-to-end lightweight fusion network is constructed to fuse infrared and visible light images.
arXiv Detail & Related papers (2023-04-11T12:11:23Z)
- CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2022-11-26T02:40:28Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- A Deep Decomposition Network for Image Processing: A Case Study for Visible and Infrared Image Fusion [38.17268441062239]
We propose a new image decomposition method based on a convolutional neural network.
The infrared image and the visible light image are each decomposed into three high-frequency feature images and one low-frequency feature image.
The two sets of feature images are fused using a specific fusion strategy to obtain fused feature images.
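The high/low-frequency split described above can be sketched with a toy 1D example, where a moving average stands in for the learned low-pass branch and the high-frequency part is the residual. This is an illustrative assumption for intuition only; in the paper the decomposition is learned by a CNN rather than computed by a fixed filter.

```python
def decompose(x, window=3):
    """Split a 1D signal into a low-frequency part (moving average)
    and a high-frequency residual; the two parts sum back to x."""
    half = window // 2
    low = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        low.append(sum(x[lo:hi]) / (hi - lo))
    high = [a - b for a, b in zip(x, low)]
    return low, high

signal = [1.0, 2.0, 6.0, 2.0, 1.0]
low, high = decompose(signal)
reconstructed = [l + h for l, h in zip(low, high)]  # recovers signal
```

The residual concentrates around the spike at index 2, which is the kind of edge/texture content a high-frequency branch is meant to carry, while the smooth component holds the large-scale intensity structure.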
arXiv Detail & Related papers (2021-02-21T06:34:33Z)
- When Image Decomposition Meets Deep Learning: A Novel Infrared and Visible Image Fusion Method [27.507158159317417]
Infrared and visible image fusion is a hot topic in image processing and image enhancement.
We propose a novel dual-stream auto-encoder based fusion network.
arXiv Detail & Related papers (2020-09-02T19:32:28Z)
- DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion [28.7553352357059]
This paper proposes a novel auto-encoder based fusion network.
The encoder decomposes an image into background and detail feature maps containing low- and high-frequency information, respectively.
In the test phase, the background and detail feature maps are merged via a fusion module, and the fused image is then recovered by the decoder.
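The test-phase merging step can be sketched with toy 1D "feature maps": averaging is a common choice for low-frequency background content, and an element-wise maximum of magnitudes is a common choice for detail content, since it keeps the stronger edge response from either modality. Both strategies here are illustrative stand-ins, not the fusion module DIDFuse actually uses.

```python
def fuse_background(bg_ir, bg_vis):
    """Average the low-frequency (background) maps of the two modalities."""
    return [(a + b) / 2 for a, b in zip(bg_ir, bg_vis)]

def fuse_detail(de_ir, de_vis):
    """Keep the larger-magnitude response per element for detail maps."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(de_ir, de_vis)]

bg = fuse_background([0.25, 0.5], [0.75, 0.0])  # → [0.5, 0.25]
de = fuse_detail([1.0, -0.25], [-0.5, 0.75])    # → [1.0, 0.75]
```

The decoder would then reconstruct the fused image from `bg` and `de` together, mirroring how the encoder split the inputs.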
arXiv Detail & Related papers (2020-03-20T11:45:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.