Mutual-Guided Dynamic Network for Image Fusion
- URL: http://arxiv.org/abs/2308.12538v2
- Date: Fri, 1 Sep 2023 04:51:13 GMT
- Title: Mutual-Guided Dynamic Network for Image Fusion
- Authors: Yuanshen Guan, Ruikang Xu, Mingde Yao, Lizhi Wang, Zhiwei Xiong
- Abstract summary: We propose a novel mutual-guided dynamic network (MGDN) for image fusion, which allows for effective information utilization across different locations and inputs.
Experimental results on five benchmark datasets demonstrate that our proposed method outperforms existing methods on four image fusion tasks.
- Score: 51.615598671899335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image fusion aims to generate a high-quality image from multiple images
captured under varying conditions. The key problem of this task is to preserve
complementary information while filtering out irrelevant information for the
fused result. However, existing methods address this problem by leveraging
static convolutional neural networks (CNNs), which suffer from two inherent
limitations during feature extraction: they cannot handle spatial-variant
contents, and they lack guidance from multiple inputs. In this paper, we propose a
novel mutual-guided dynamic network (MGDN) for image fusion, which allows for
effective information utilization across different locations and inputs.
Specifically, we design a mutual-guided dynamic filter (MGDF) for adaptive
feature extraction, composed of a mutual-guided cross-attention (MGCA) module
and a dynamic filter predictor, where the former incorporates additional
guidance from different inputs and the latter generates spatial-variant kernels
for different locations. In addition, we introduce a parallel feature fusion
(PFF) module to effectively fuse local and global information of the extracted
features. To further reduce the redundancy among the extracted features while
simultaneously preserving their shared structural information, we devise a
novel loss function that combines the minimization of normalized mutual
information (NMI) with an estimated gradient mask. Experimental results on five
benchmark datasets demonstrate that our proposed method outperforms existing
methods on four image fusion tasks. The code and model are publicly available
at: https://github.com/Guanys-dar/MGDN.
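For a concrete picture of the MGDF idea described in the abstract, below is a minimal PyTorch-style sketch of how cross-attention guidance and per-location dynamic kernels could be combined. The module name, tensor shapes, and hyper-parameters (e.g. `kernel_size`, `heads`) are illustrative assumptions rather than the authors' implementation; the official code at https://github.com/Guanys-dar/MGDN is the reference.
```python
# Minimal sketch of a mutual-guided dynamic filter (MGDF): cross-attention from a
# guidance input modulates the target features, and a 1x1 predictor emits one k*k
# kernel per spatial location, applied depthwise to the target features.
# All names, shapes, and hyper-parameters here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MutualGuidedDynamicFilter(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3, heads: int = 4):
        super().__init__()
        self.kernel_size = kernel_size
        # Mutual-guided cross-attention: target features query the guidance features.
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        # Dynamic filter predictor: one k*k kernel per pixel, shared across channels.
        self.predictor = nn.Conv2d(channels, kernel_size * kernel_size, kernel_size=1)

    def forward(self, feat: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        q = feat.flatten(2).transpose(1, 2)    # (B, H*W, C) queries from the target
        kv = guide.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values from the guide
        fused, _ = self.attn(q, kv, kv)        # guidance-aware features
        fused = fused.transpose(1, 2).reshape(b, c, h, w)

        # Spatial-variant kernels: softmax-normalised weights for each location.
        kernels = F.softmax(self.predictor(fused), dim=1)           # (B, k*k, H, W)
        patches = F.unfold(feat, self.kernel_size, padding=self.kernel_size // 2)
        patches = patches.view(b, c, self.kernel_size ** 2, h * w)  # local neighbourhoods
        kernels = kernels.view(b, 1, self.kernel_size ** 2, h * w)
        return (patches * kernels).sum(dim=2).view(b, c, h, w)


if __name__ == "__main__":
    x, y = torch.rand(1, 32, 64, 64), torch.rand(1, 32, 64, 64)
    print(MutualGuidedDynamicFilter(32).eval()(x, y).shape)  # torch.Size([1, 32, 64, 64])
```
In this sketch a single kernel is shared across channels, a common simplification for dynamic filtering; the paper's MGDF may group kernels differently, and the PFF module and NMI-based loss with gradient mask are not reproduced here.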
Related papers
- Rethinking Normalization Strategies and Convolutional Kernels for Multimodal Image Fusion [25.140475569677758]
Multimodal image fusion aims to integrate information from different modalities to obtain a comprehensive image.
Existing methods tend to prioritize natural image fusion and focus on information complementarity and network training strategies.
This paper dissects the significant differences between the two tasks regarding fusion goals, statistical properties, and data distribution.
arXiv Detail & Related papers (2024-11-15T08:36:24Z) - Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z) - DAF-Net: A Dual-Branch Feature Decomposition Fusion Network with Domain Adaptive for Infrared and Visible Image Fusion [21.64382683858586]
Infrared and visible image fusion aims to combine complementary information from both modalities to provide a more comprehensive scene understanding.
We propose a dual-branch feature decomposition fusion network (DAF-Net) with a domain-adaptive module based on the multi-kernel maximum mean discrepancy (MK-MMD).
By incorporating MK-MMD, the DAF-Net effectively aligns the latent feature spaces of visible and infrared images, thereby improving the quality of the fused images.
arXiv Detail & Related papers (2024-09-18T02:14:08Z) - CasDyF-Net: Image Dehazing via Cascaded Dynamic Filters [0.0]
Image dehazing aims to restore image clarity and visual quality by reducing atmospheric scattering and absorption effects.
Inspired by dynamic filtering, we propose using cascaded dynamic filters to create a multi-branch network.
Experiments on RESIDE, Haze4K, and O-Haze datasets validate our method's effectiveness.
arXiv Detail & Related papers (2024-09-13T03:20:38Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to the Stable Diffusion denoising network, and a pre-trained feature extractor in the feature space.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2022-11-26T02:40:28Z) - Semantic Labeling of High Resolution Images Using EfficientUNets and Transformers [5.177947445379688]
We propose a new segmentation model that combines convolutional neural networks with deep transformers.
Our results demonstrate that the proposed methodology improves segmentation accuracy compared to state-of-the-art techniques.
arXiv Detail & Related papers (2022-06-20T12:03:54Z) - EPMF: Efficient Perception-aware Multi-sensor Fusion for 3D Semantic Segmentation [62.210091681352914]
We study multi-sensor fusion for 3D semantic segmentation, which underpins many applications such as autonomous driving and robotics.
In this work, we investigate a collaborative fusion scheme called perception-aware multi-sensor fusion (PMF).
We propose a two-stream network to extract features from the two modalities separately. The extracted features are fused by effective residual-based fusion modules.
arXiv Detail & Related papers (2021-06-21T10:47:26Z) - Adaptive Context-Aware Multi-Modal Network for Depth Completion [107.15344488719322]
We propose to adopt graph propagation to capture the observed spatial contexts.
We then apply an attention mechanism to the propagation, which encourages the network to model contextual information adaptively.
Finally, we introduce the symmetric gated fusion strategy to exploit the extracted multi-modal features effectively.
Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves the state-of-the-art performance on two benchmarks.
arXiv Detail & Related papers (2020-08-25T06:00:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.