A Deep Decomposition Network for Image Processing: A Case Study for
Visible and Infrared Image Fusion
- URL: http://arxiv.org/abs/2102.10526v1
- Date: Sun, 21 Feb 2021 06:34:33 GMT
- Title: A Deep Decomposition Network for Image Processing: A Case Study for
Visible and Infrared Image Fusion
- Authors: Yu Fu, Xiao-Jun Wu, Josef Kittler
- Abstract summary: We propose a new image decomposition method based on a convolutional neural network.
We input an infrared image and a visible light image and decompose each into three high-frequency feature images and one low-frequency feature image.
The two sets of feature images are fused using a specific fusion strategy to obtain fused feature images.
- Score: 38.17268441062239
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Image decomposition is a crucial subject in the field of image processing. It
can extract salient features from the source image. We propose a new image
decomposition method based on a convolutional neural network. This method can be
applied to many image processing tasks. In this paper, we apply the image
decomposition network to the image fusion task. We input an infrared image and a
visible light image and decompose each into three high-frequency feature images
and one low-frequency feature image. The two sets of feature images are fused
using a specific fusion strategy to obtain fused feature images. Finally, the
feature images are reconstructed to obtain the fused image. Compared with
state-of-the-art fusion methods, this method achieves better performance in both
subjective and objective evaluation.
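The decompose-fuse-reconstruct pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the learned CNN decomposition is replaced by a hypothetical box-blur band split, and the "specific fusion strategy" is stood in for by a toy max-magnitude/average rule. All function names (`decompose`, `fuse`, `reconstruct`) are assumptions for illustration.

```python
import numpy as np

def decompose(img, levels=3):
    """Hypothetical stand-in for the paper's CNN decomposition: split an
    image into `levels` high-frequency feature images plus one
    low-frequency feature image via repeated box blurs."""
    def box_blur(x):
        # 3x3 box blur with edge padding
        p = np.pad(x, 1, mode='edge')
        return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    highs = []
    low = img.astype(float)
    for _ in range(levels):
        blurred = box_blur(low)
        highs.append(low - blurred)  # high-frequency detail at this scale
        low = blurred                # remaining low frequencies
    return highs, low

def fuse(h_a, l_a, h_b, l_b):
    """Toy fusion strategy (not the paper's): keep the larger-magnitude
    high-frequency response per pixel, average the low-frequency parts."""
    fused_h = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(h_a, h_b)]
    fused_l = (l_a + l_b) / 2.0
    return fused_h, fused_l

def reconstruct(highs, low):
    """Reconstruction inverts the decomposition: sum all bands.
    The blur residuals telescope, so decompose->reconstruct is lossless."""
    return low + sum(highs)

ir = np.random.rand(16, 16)   # stand-in infrared image
vis = np.random.rand(16, 16)  # stand-in visible-light image
h_ir, l_ir = decompose(ir)
h_vis, l_vis = decompose(vis)
fused = reconstruct(*fuse(h_ir, l_ir, h_vis, l_vis))
```

Because each high-frequency band is the exact residual of the next blur, summing all bands recovers the source image perfectly, which mirrors the reconstruction step the abstract describes.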
Related papers
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z) - A Multi-scale Information Integration Framework for Infrared and Visible
Image Fusion [50.84746752058516]
Infrared and visible image fusion aims to generate a fused image containing the intensity and detail information of the source images.
Existing methods mostly adopt a simple weight in the loss function to decide how much information from each modality is retained.
We propose a multi-scale dual attention (MDA) framework for infrared and visible image fusion.
arXiv Detail & Related papers (2023-12-07T14:40:05Z) - DePF: A Novel Fusion Approach based on Decomposition Pooling for
Infrared and Visible Images [7.11574718614606]
A novel fusion network based on decomposition pooling (de-pooling) is proposed, termed DePF.
A de-pooling based encoder is designed to extract multi-scale image features and detail features of the source images simultaneously.
The experimental results demonstrate that the proposed method outperforms state-of-the-art fusion methods.
arXiv Detail & Related papers (2023-05-27T05:47:14Z) - LRRNet: A Novel Representation Learning Guided Fusion Network for
Infrared and Visible Images [98.36300655482196]
We formulate the fusion task mathematically, and establish a connection between its optimal solution and the network architecture that can implement it.
In particular we adopt a learnable representation approach to the fusion task, in which the construction of the fusion network architecture is guided by the optimisation algorithm producing the learnable model.
Based on this novel network architecture, an end-to-end lightweight fusion network is constructed to fuse infrared and visible light images.
arXiv Detail & Related papers (2023-04-11T12:11:23Z) - CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature
Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z) - Unsupervised Image Fusion Method based on Feature Mutual Mapping [16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections between pixels of the input source images.
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z) - UFA-FUSE: A novel deep supervised and hybrid model for multi-focus image
fusion [4.105749631623888]
Traditional and deep learning-based fusion methods generate the intermediate decision map through a series of post-processing procedures.
Inspired by the image reconstruction techniques based on deep learning, we propose a multi-focus image fusion network framework.
We show that the proposed approach for multi-focus image fusion achieves remarkable fusion performance compared to 19 state-of-the-art fusion methods.
arXiv Detail & Related papers (2021-01-12T14:33:13Z) - DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion [28.7553352357059]
This paper proposes a novel auto-encoder based fusion network.
The encoder decomposes an image into background and detail feature maps with low- and high-frequency information, respectively.
In the test phase, background and detail feature maps are respectively merged via a fusion module, and the fused image is recovered by the decoder.
arXiv Detail & Related papers (2020-03-20T11:45:20Z) - Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)