LE2Fusion: A novel local edge enhancement module for infrared and visible image fusion
- URL: http://arxiv.org/abs/2305.17374v1
- Date: Sat, 27 May 2023 05:37:02 GMT
- Title: LE2Fusion: A novel local edge enhancement module for infrared and visible image fusion
- Authors: Yongbiao Xiao, Hui Li, Chunyang Cheng, and Xiaoning Song
- Abstract summary: Under complex illumination conditions, few algorithms pay attention to the edge information of local regions.
We propose a fusion network based on local edge enhancement, named LE2Fusion.
Experiments demonstrate that the proposed method exhibits better fusion performance than the state-of-the-art fusion methods on public datasets.
- Score: 7.11574718614606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The infrared and visible image fusion task aims to generate a fused
image that contains salient features and rich texture details from multi-source
images. However, under complex illumination conditions, few algorithms pay
attention to the edge information of local regions, which is crucial for
downstream tasks. To this end, we propose a fusion network based on local edge
enhancement, named LE2Fusion. Specifically, a local edge enhancement (LE2)
module is proposed to improve the edge information under complex illumination
conditions and preserve the essential features of the image. For feature
extraction, a multi-scale residual attention (MRA) module is applied to extract
rich features. Then, with LE2, a set of enhancement weights is generated, which
is utilized in the feature fusion strategy and used to guide the image
reconstruction. To better preserve local detail and structure information, a
pixel intensity loss function based on local regions is also presented.
Experiments demonstrate that the proposed method exhibits better fusion
performance than state-of-the-art fusion methods on public datasets.
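The two concrete mechanisms in the abstract, edge-derived enhancement weights guiding the fusion and a pixel intensity loss over local regions, can be illustrated with a minimal sketch. This is not the authors' code: the Sobel edge operator, the per-image normalization, the relative-saliency weighting, and the 8x8 window are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' implementation) of edge-guided fusion
# weights and a local-region pixel intensity loss, assuming Sobel edges
# and an 8x8 averaging window.
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def local_edge_weights(img: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Map a grayscale image (B,1,H,W) to per-pixel edge weights in [0,1]."""
    gx = F.conv2d(img, SOBEL_X, padding=1)
    gy = F.conv2d(img, SOBEL_Y, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2 + eps)
    # Normalize per image so weights are comparable across inputs.
    return mag / (mag.amax(dim=(2, 3), keepdim=True) + eps)

def edge_guided_fusion(ir_feat, vis_feat, ir_img, vis_img):
    """Blend features with weights derived from relative local edge strength."""
    w_ir = local_edge_weights(ir_img)
    w_vis = local_edge_weights(vis_img)
    w = w_ir / (w_ir + w_vis + 1e-6)  # share of edge saliency held by the IR input
    return w * ir_feat + (1.0 - w) * vis_feat

def local_intensity_loss(fused, ir_img, vis_img, window: int = 8):
    """In each local window, pull the fused image toward whichever source
    has the higher mean intensity there."""
    pool = lambda x: F.avg_pool2d(x, window)
    target = torch.maximum(pool(ir_img), pool(vis_img))
    return F.l1_loss(pool(fused), target)
```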
Related papers
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z)
- DAF-Net: A Dual-Branch Feature Decomposition Fusion Network with Domain Adaptive for Infrared and Visible Image Fusion [21.64382683858586]
Infrared and visible image fusion aims to combine complementary information from both modalities to provide a more comprehensive scene understanding.
We propose a dual-branch feature decomposition fusion network (DAF-Net) with multi-kernel maximum mean discrepancy (MK-MMD) for domain adaptation.
By incorporating MK-MMD, DAF-Net effectively aligns the latent feature spaces of visible and infrared images, thereby improving the quality of the fused images (a sketch of MK-MMD follows this entry).
arXiv Detail & Related papers (2024-09-18T02:14:08Z)
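The MK-MMD alignment term mentioned above has a standard form: the maximum mean discrepancy between two feature distributions, measured under a sum of Gaussian kernels. The sketch below is a hedged illustration; the kernel family and bandwidths are assumptions, not the DAF-Net authors' settings.

```python
# Hedged sketch of multi-kernel maximum mean discrepancy (MK-MMD).
# Bandwidths are illustrative; this is a biased estimate (diagonal included).
import torch

def mk_mmd(x: torch.Tensor, y: torch.Tensor,
           sigmas=(1.0, 2.0, 4.0, 8.0)) -> torch.Tensor:
    """MMD^2 between feature batches x (N,D) and y (M,D), summed over kernels."""
    def gram(a, b):
        d2 = torch.cdist(a, b) ** 2  # pairwise squared Euclidean distances
        return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()
```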
- A Semantic-Aware and Multi-Guided Network for Infrared-Visible Image Fusion [41.34335755315773]
Multi-modality image fusion aims at fusing specific-modality and shared-modality information from two source images.
We propose a three-branch encoder-decoder architecture along with corresponding fusion layers as the fusion strategy.
Our method obtains results competitive with state-of-the-art methods on visible/infrared image fusion and medical image fusion tasks.
arXiv Detail & Related papers (2024-06-11T09:32:40Z)
- Spatial-frequency Dual-Domain Feature Fusion Network for Low-Light Remote Sensing Image Enhancement [49.15531684596958]
We propose a Dual-Domain Feature Fusion Network (DFFN) for low-light remote sensing image enhancement.
The first phase learns amplitude information to restore image brightness, and the second phase learns phase information to refine details (this two-phase idea is sketched below).
We have constructed two dark-light remote sensing datasets to address the current lack of datasets for dark-light remote sensing image enhancement.
arXiv Detail & Related papers (2024-04-26T13:21:31Z)
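The two-phase amplitude/phase design described above maps naturally onto the Fourier transform: brightness lives largely in the amplitude spectrum, structure in the phase. The sketch below is illustrative only; `amp_net` and `phase_net` are hypothetical stand-ins for DFFN's learned stages.

```python
# Illustrative two-phase frequency-domain enhancement; `amp_net` and
# `phase_net` are hypothetical learned modules, not DFFN's actual layers.
import torch

def two_phase_enhance(img: torch.Tensor, amp_net, phase_net) -> torch.Tensor:
    """img: (B,C,H,W) low-light input; returns an enhanced image."""
    spec = torch.fft.fft2(img)
    amp, phase = spec.abs(), spec.angle()
    amp = amp_net(amp)  # phase 1: restore brightness from the amplitude
    coarse = torch.fft.ifft2(torch.polar(amp, phase)).real
    return phase_net(coarse)  # phase 2: refine details
```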
- DePF: A Novel Fusion Approach based on Decomposition Pooling for Infrared and Visible Images [7.11574718614606]
A novel fusion network based on decomposition pooling (de-pooling) is proposed, termed DePF.
A de-pooling based encoder is designed to extract multi-scale image features and detail features of the source images simultaneously.
The experimental results demonstrate that the proposed method exhibits superior fusion performance over state-of-the-art methods.
arXiv Detail & Related papers (2023-05-27T05:47:14Z)
- Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder development: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z)
- Multi-modal Gated Mixture of Local-to-Global Experts for Dynamic Image Fusion [59.19469551774703]
Infrared and visible image fusion aims to integrate comprehensive information from multiple sources to achieve superior performance on various practical tasks.
We propose a dynamic image fusion framework with a multi-modal gated mixture of local-to-global experts.
Our model consists of a Mixture of Local Experts (MoLE) and a Mixture of Global Experts (MoGE) guided by a multi-modal gate (a rough sketch of such gating follows this entry).
arXiv Detail & Related papers (2023-02-02T20:06:58Z)
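A gate that softly blends a small-receptive-field expert with a global one is a common way to realize such a local-to-global mixture. The sketch below is an assumption-laden illustration, not the authors' architecture; all layer choices and sizes are invented for the example.

```python
# Rough sketch of per-pixel gating over a local and a global expert;
# layer choices and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GatedLocalGlobalFusion(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.local_expert = nn.Conv2d(channels, channels, 3, padding=1)  # local context
        self.global_expert = nn.Sequential(                              # global context
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1))
        self.gate = nn.Sequential(nn.Conv2d(channels, 2, 1), nn.Softmax(dim=1))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(feat)                    # (B,2,H,W) mixing weights
        local_out = self.local_expert(feat)    # (B,C,H,W)
        global_out = self.global_expert(feat)  # (B,C,1,1), broadcasts spatially
        return g[:, :1] * local_out + g[:, 1:] * global_out
```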
arXiv Detail & Related papers (2023-02-02T20:06:58Z) - CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for
Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion (a hedged sketch of a correlation-driven decomposition loss follows this entry).
arXiv Detail & Related papers (2022-11-26T02:40:28Z)
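One way to read "correlation-driven decomposition" is as an objective that keeps the low-frequency (base) features of the two modalities correlated while decorrelating the high-frequency (detail) features. The sketch below is only a guess at such a loss; the paper's exact formulation may differ.

```python
# Hedged sketch of a correlation-driven decomposition loss: reward detail
# decorrelation relative to base correlation. Not the paper's exact loss.
import torch

def pearson(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Correlation coefficient between flattened feature maps, per sample."""
    a = a.flatten(1) - a.flatten(1).mean(dim=1, keepdim=True)
    b = b.flatten(1) - b.flatten(1).mean(dim=1, keepdim=True)
    return (a * b).sum(dim=1) / (a.norm(dim=1) * b.norm(dim=1) + eps)

def decomposition_loss(base_ir, base_vis, detail_ir, detail_vis):
    # +1.01 keeps the denominator positive since correlation lies in [-1, 1].
    cc_detail = pearson(detail_ir, detail_vis)
    cc_base = pearson(base_ir, base_vis)
    return (cc_detail ** 2 / (cc_base + 1.01)).mean()
```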
arXiv Detail & Related papers (2022-11-26T02:40:28Z) - CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature
Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- UFA-FUSE: A novel deep supervised and hybrid model for multi-focus image fusion [4.105749631623888]
Traditional and deep learning-based fusion methods generate the intermediate decision map through a series of post-processing procedures.
Inspired by the image reconstruction techniques based on deep learning, we propose a multi-focus image fusion network framework.
We show that the proposed approach for multi-focus image fusion achieves remarkable fusion performance compared to 19 state-of-the-art fusion methods.
arXiv Detail & Related papers (2021-01-12T14:33:13Z)
- When Image Decomposition Meets Deep Learning: A Novel Infrared and Visible Image Fusion Method [27.507158159317417]
Infrared and visible image fusion is a hot topic in image processing and image enhancement.
We propose a novel dual-stream auto-encoder based fusion network.
arXiv Detail & Related papers (2020-09-02T19:32:28Z)