RFN-Nest: An end-to-end residual fusion network for infrared and visible images
- URL: http://arxiv.org/abs/2103.04286v1
- Date: Sun, 7 Mar 2021 07:29:50 GMT
- Title: RFN-Nest: An end-to-end residual fusion network for infrared and visible images
- Authors: Hui Li, Xiao-Jun Wu, Josef Kittler
- Abstract summary: We propose an end-to-end fusion network architecture (RFN-Nest) for infrared and visible image fusion.
A novel detail-preserving loss function and a feature-enhancing loss function are proposed to train the RFN.
Experimental results on public-domain data sets show that our end-to-end fusion network outperforms the state-of-the-art methods.
- Score: 37.935940961760785
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the image fusion field, the design of deep learning-based fusion methods
is far from routine; it is invariably fusion-task specific and requires careful
consideration. The most difficult part of the design is choosing an appropriate
strategy to generate the fused image for the specific task at hand. Thus,
devising a learnable fusion strategy is a challenging problem in the
community of image fusion. To address this problem, a novel end-to-end fusion
network architecture (RFN-Nest) is developed for infrared and visible image
fusion. We propose a residual fusion network (RFN) which is based on a residual
architecture to replace the traditional fusion approach. A novel
detail-preserving loss function and a feature-enhancing loss function are
proposed to train the RFN. The fusion model is learned using a novel
two-stage training strategy. In the first stage, we train an auto-encoder based
on an innovative nest connection (Nest) concept. Next, the RFN is trained using
the proposed loss functions. The experimental results on public domain data
sets show that, compared with the existing methods, our end-to-end fusion
network delivers a better performance than the state-of-the-art methods in both
subjective and objective evaluation. The code of our fusion method is available
at https://github.com/hli1221/imagefusion-rfn-nest
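To make the two-stage recipe concrete, here is a minimal PyTorch-style sketch. The block definitions, loss terms and weights below are illustrative placeholders, not the paper's exact design; the official implementation is in the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """3x3 conv + ReLU; a stand-in for the paper's multi-layer blocks."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)

    def forward(self, x):
        return F.relu(self.conv(x))

encoder = ConvBlock(1, 64)    # placeholder for the multi-scale encoder
decoder = ConvBlock(64, 1)    # placeholder for the nest-connection decoder
rfn = nn.Sequential(ConvBlock(128, 64), ConvBlock(64, 64))  # residual fusion net

# Stage 1: train encoder and decoder as an auto-encoder on reconstruction.
opt1 = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
img = torch.rand(4, 1, 64, 64)                     # dummy grayscale batch
loss_rec = F.mse_loss(decoder(encoder(img)), img)
opt1.zero_grad()
loss_rec.backward()
opt1.step()

# Stage 2: freeze the auto-encoder and train only the RFN.
for p in list(encoder.parameters()) + list(decoder.parameters()):
    p.requires_grad_(False)
opt2 = torch.optim.Adam(rfn.parameters(), lr=1e-4)
ir, vis = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
f_ir, f_vis = encoder(ir), encoder(vis)
f_fused = f_ir + rfn(torch.cat([f_ir, f_vis], dim=1))   # residual fusion
fused = decoder(f_fused)
# Stand-ins for the detail-preserving loss (fused image vs. visible input)
# and the feature-enhancing loss (fused vs. source features); the paper's
# exact terms (e.g. SSIM-based detail loss) and weights differ.
loss = F.l1_loss(fused, vis) + 0.1 * F.mse_loss(f_fused, torch.max(f_ir, f_vis))
opt2.zero_grad()
loss.backward()
opt2.step()
```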
Related papers
- DAF-Net: A Dual-Branch Feature Decomposition Fusion Network with Domain Adaptive for Infrared and Visible Image Fusion [21.64382683858586]
Infrared and visible image fusion aims to combine complementary information from both modalities to provide a more comprehensive scene understanding.
We propose a dual-branch feature decomposition fusion network (DAF-Net) that introduces the multi-kernel maximum mean discrepancy (MK-MMD) for domain adaptation.
By incorporating MK-MMD, the DAF-Net effectively aligns the latent feature spaces of visible and infrared images, thereby improving the quality of the fused images.
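For reference, a minimal sketch of an MK-MMD alignment term between two feature batches; the Gaussian kernel bandwidths and feature shapes below are illustrative assumptions, not DAF-Net's actual settings.

```python
import torch

def mk_mmd(x, y, sigmas=(1.0, 2.0, 4.0)):
    """Multi-kernel maximum mean discrepancy between feature batches x and y.

    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)], with k a sum of
    Gaussian kernels at several bandwidths (the sigmas are placeholders).
    """
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)  # pairwise squared distances
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas)

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Hypothetical usage: align infrared and visible latent features.
ir_feat = torch.randn(8, 256)
vis_feat = torch.randn(8, 256)
loss_align = mk_mmd(ir_feat, vis_feat)
```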
arXiv Detail & Related papers (2024-09-18T02:14:08Z)
- FuseFormer: A Transformer for Visual and Thermal Image Fusion [3.6064695344878093]
We propose a novel methodology for the image fusion problem that mitigates the limitations associated with using classical evaluation metrics as loss functions.
Our approach integrates a transformer-based multi-scale fusion strategy that adeptly addresses local and global context information.
Our proposed method, along with the novel loss function definition, demonstrates superior performance compared to other competitive fusion algorithms.
arXiv Detail & Related papers (2024-02-01T19:40:39Z)
- ReFusion: Learning Image Fusion from Reconstruction with Learnable Loss via Meta-Learning [17.91346343984845]
We introduce a unified image fusion framework based on meta-learning, named ReFusion.
ReFusion employs a parameterized loss function, dynamically adjusted by the training framework according to the specific scenario and task.
It is capable of adapting to various tasks, including infrared-visible, medical, multi-focus, and multi-exposure image fusion.
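As a rough illustration of a loss that is itself parameterized (the actual ReFusion formulation differs), the weights balancing fidelity to each source can be learnable parameters that an outer meta-learning loop updates while the fusion network is trained in an inner loop:

```python
import torch
import torch.nn as nn

class ParameterizedFusionLoss(nn.Module):
    """Learnable trade-off between fidelity to each source image.

    A hypothetical stand-in for a parameterized loss: softmax-normalised
    weights decide how strongly the fused image must match each source.
    """
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(2))  # one weight per source

    def forward(self, fused, src_a, src_b):
        w = torch.softmax(self.logits, dim=0)
        return (w[0] * (fused - src_a).abs().mean()
                + w[1] * (fused - src_b).abs().mean())

# In a meta-learning setup, the logits would be updated in the outer loop
# (e.g. from a reconstruction objective), adapting the loss to the task.
```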
arXiv Detail & Related papers (2023-12-13T07:40:39Z)
- A Task-guided, Implicitly-searched and Meta-initialized Deep Model for Image Fusion [69.10255211811007]
We present a Task-guided, Implicit-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario.
Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
arXiv Detail & Related papers (2023-05-25T08:54:08Z)
- LRRNet: A Novel Representation Learning Guided Fusion Network for Infrared and Visible Images [98.36300655482196]
We formulate the fusion task mathematically, and establish a connection between its optimal solution and the network architecture that can implement it.
In particular, we adopt a learnable representation approach to the fusion task, in which the construction of the fusion network architecture is guided by the optimisation algorithm producing the learnable model.
Based on this novel network architecture, an end-to-end lightweight fusion network is constructed to fuse infrared and visible light images.
arXiv Detail & Related papers (2023-04-11T12:11:23Z)
- CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2022-11-26T02:40:28Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Image Fusion Transformer [75.71025138448287]
In image fusion, images obtained from different sensors are fused to generate a single image with enhanced information.
In recent years, state-of-the-art methods have adopted Convolutional Neural Networks (CNNs) to encode meaningful features for image fusion.
We propose a novel Image Fusion Transformer (IFT) where we develop a transformer-based multi-scale fusion strategy.
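A minimal sketch of the general idea, with tokens from both modalities attending to each other in one sequence; the shapes and layer sizes are assumptions, not IFT's actual configuration.

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
f_ir = torch.randn(2, 256, 64)   # (batch, tokens, channels) infrared features
f_vis = torch.randn(2, 256, 64)  # visible-light features
tokens = torch.cat([f_ir, f_vis], dim=1)        # joint sequence, both modalities
fused_tokens, _ = attn(tokens, tokens, tokens)  # global cross-modal context
# Average the two modality halves back into one fused feature sequence.
fused = 0.5 * (fused_tokens[:, :256] + fused_tokens[:, 256:])
```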
arXiv Detail & Related papers (2021-07-19T16:42:49Z)
- UFA-FUSE: A novel deep supervised and hybrid model for multi-focus image fusion [4.105749631623888]
Traditional and deep learning-based fusion methods generate the intermediate decision map through a series of post-processing procedures.
Inspired by the image reconstruction techniques based on deep learning, we propose a multi-focus image fusion network framework.
We show that the proposed approach for multi-focus image fusion achieves remarkable fusion performance compared to 19 state-of-the-art fusion methods.
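For context, the intermediate-decision-map paradigm mentioned above amounts to a per-pixel soft selection between the sources; a minimal sketch, where the map is a placeholder tensor rather than a network prediction:

```python
import torch

img_a = torch.rand(1, 3, 128, 128)      # near-focus source (dummy data)
img_b = torch.rand(1, 3, 128, 128)      # far-focus source (dummy data)
decision = torch.rand(1, 1, 128, 128)   # placeholder decision map in [0, 1]
# Per-pixel blend: the map selects the in-focus source at each location.
fused = decision * img_a + (1 - decision) * img_b
```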
arXiv Detail & Related papers (2021-01-12T14:33:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.