Bridging the Gap between Multi-focus and Multi-modal: A Focused
Integration Framework for Multi-modal Image Fusion
- URL: http://arxiv.org/abs/2311.01886v2
- Date: Wed, 31 Jan 2024 12:13:49 GMT
- Title: Bridging the Gap between Multi-focus and Multi-modal: A Focused
Integration Framework for Multi-modal Image Fusion
- Authors: Xilai Li, Xiaosong Li, Tao Ye, Xiaoqi Cheng, Wuyang Liu, Haishu Tan
- Abstract summary: Multi-modal image fusion (MMIF) integrates valuable information from different modality images into a fused one.
This paper proposes an MMIF framework for joint focused integration and modality information extraction.
The proposed algorithm can surpass the state-of-the-art methods in visual perception and quantitative evaluation.
- Score: 5.417493475406649
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-modal image fusion (MMIF) integrates valuable information from
different modality images into a fused one. However, the fusion of multiple
visible images with different focal regions and infrared images is an
unprecedented challenge in real MMIF applications. This is because of the
limited depth of field of visible optical lenses, which impedes the
simultaneous capture of focal information within the same scene. To address
this issue, in this paper, we propose an MMIF framework for joint focused
integration and modality information extraction. Specifically, a
semi-sparsity-based smoothing filter is introduced to decompose the images into
structure and texture components. Subsequently, a novel multi-scale operator is
proposed to fuse the texture components, capable of detecting significant
information by considering the pixel focus attributes and relevant data from
various modal images. Additionally, to achieve an effective capture of scene
luminance and reasonable contrast maintenance, we consider the distribution of
energy information in the structural components in terms of multi-directional
frequency variance and information entropy. Extensive experiments on existing
MMIF datasets, as well as the object detection and depth estimation tasks,
consistently demonstrate that the proposed algorithm can surpass the
state-of-the-art methods in visual perception and quantitative evaluation. The
code is available at https://github.com/ixilai/MFIF-MMIF.
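The pipeline described in the abstract (decompose each image into structure and texture, fuse the texture components by detail significance, fuse the structure components by an energy measure) can be sketched as follows. This is a minimal illustrative sketch only: the box filter stands in for the paper's semi-sparsity-based smoothing filter, the max-absolute rule for its multi-scale focus-aware operator, and the local-variance weights for its multi-directional frequency variance and information entropy; none of these simplified pieces are the authors' actual implementation (see their repository for that).

```python
import numpy as np

def smooth(img, k=5):
    # Box filter as a crude stand-in for the paper's
    # semi-sparsity-based smoothing filter.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img):
    # Structure = smoothed base layer; texture = residual detail layer.
    structure = smooth(img)
    return structure, img - structure

def fuse(visible, infrared):
    s_v, t_v = decompose(visible.astype(float))
    s_i, t_i = decompose(infrared.astype(float))
    # Texture fusion: keep the per-pixel coefficient with the larger
    # absolute response (simplified detail-significance rule).
    texture = np.where(np.abs(t_v) >= np.abs(t_i), t_v, t_i)
    # Structure fusion: weight by smoothed local variance as a rough
    # proxy for the paper's frequency-variance / entropy energy measure.
    w_v = smooth((visible - s_v) ** 2) + 1e-8
    w_i = smooth((infrared - s_i) ** 2) + 1e-8
    structure = (w_v * s_v + w_i * s_i) / (w_v + w_i)
    return structure + texture
```

Fusing an image with itself reproduces the input exactly, since the weighted structure average and the texture selection both reduce to the identity; this is a quick sanity check on any decomposition-based fusion rule.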
Related papers
- Rethinking Normalization Strategies and Convolutional Kernels for Multimodal Image Fusion [25.140475569677758]
Multimodal image fusion aims to integrate information from different modalities to obtain a comprehensive image.
Existing methods tend to prioritize natural image fusion and focus on information complementarity and network training strategies.
This paper dissects the significant differences between the two tasks regarding fusion goals, statistical properties, and data distribution.
arXiv Detail & Related papers (2024-11-15T08:36:24Z)
- A Global Depth-Range-Free Multi-View Stereo Transformer Network with Pose Embedding [76.44979557843367]
We propose a novel multi-view stereo (MVS) framework that gets rid of the depth range prior.
We introduce a Multi-view Disparity Attention (MDA) module to aggregate long-range context information.
We explicitly estimate the quality of the current pixel corresponding to sampled points on the epipolar line of the source image.
arXiv Detail & Related papers (2024-11-04T08:50:16Z)
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z)
- MMA-UNet: A Multi-Modal Asymmetric UNet Architecture for Infrared and Visible Image Fusion [4.788349093716269]
Multi-modal image fusion (MMIF) maps useful information from various modalities into the same representation space.
The existing fusion algorithms tend to symmetrically fuse the multi-modal images, causing the loss of shallow information or bias towards a single modality.
In this study, we analyzed the spatial distribution differences of information in different modalities and proved that encoding features within the same network is not conducive to achieving simultaneous deep feature space alignment.
arXiv Detail & Related papers (2024-04-27T01:35:21Z)
- A Dual Domain Multi-exposure Image Fusion Network based on the Spatial-Frequency Integration [57.14745782076976]
Multi-exposure image fusion aims to generate a single high-dynamic image by integrating images with different exposures.
We propose a novel perspective on multi-exposure image fusion via the Spatial-Frequency Integration Framework, named MEF-SFI.
Our method achieves visually appealing fusion results compared with state-of-the-art multi-exposure image fusion approaches.
arXiv Detail & Related papers (2023-12-17T04:45:15Z)
- Fusion of Infrared and Visible Images based on Spatial-Channel Attentional Mechanism [3.388001684915793]
We present AMFusionNet, an innovative approach to infrared and visible image fusion (IVIF)
By assimilating thermal details from infrared images with texture features from visible sources, our method produces images enriched with comprehensive information.
Our method outperforms state-of-the-art algorithms both qualitatively and quantitatively.
arXiv Detail & Related papers (2023-08-25T21:05:11Z)
- Equivariant Multi-Modality Image Fusion [124.11300001864579]
We propose the Equivariant Multi-Modality imAge fusion paradigm for end-to-end self-supervised learning.
Our approach is rooted in the prior knowledge that natural imaging responses are equivariant to certain transformations.
Experiments confirm that EMMA yields high-quality fusion results for infrared-visible and medical images.
arXiv Detail & Related papers (2023-05-19T05:50:24Z)
- An Interactively Reinforced Paradigm for Joint Infrared-Visible Image Fusion and Saliency Object Detection [59.02821429555375]
This research focuses on the discovery and localization of hidden objects in the wild and serves unmanned systems.
Through empirical analysis, infrared and visible image fusion (IVIF) makes hard-to-find objects apparent.
Multimodal salient object detection (SOD) accurately delineates the precise spatial location of objects within the picture.
arXiv Detail & Related papers (2023-05-17T06:48:35Z)
- Multi-modal Gated Mixture of Local-to-Global Experts for Dynamic Image Fusion [59.19469551774703]
Infrared and visible image fusion aims to integrate comprehensive information from multiple sources to achieve superior performances on various practical tasks.
We propose a dynamic image fusion framework with a multi-modal gated mixture of local-to-global experts.
Our model consists of a Mixture of Local Experts (MoLE) and a Mixture of Global Experts (MoGE) guided by a multi-modal gate.
arXiv Detail & Related papers (2023-02-02T20:06:58Z)
- MFIF-GAN: A New Generative Adversarial Network for Multi-Focus Image Fusion [29.405149234582623]
Multi-Focus Image Fusion (MFIF) is a promising technique to obtain all-in-focus images.
One of the research trends of MFIF is to avoid the defocus spread effect (DSE) around the focus/defocus boundary (FDB)
We propose a network termed MFIF-GAN to generate focus maps in which the foreground regions are correctly larger than the corresponding objects.
arXiv Detail & Related papers (2020-09-21T09:36:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.