DDFusion: Degradation-Decoupled Fusion Framework for Robust Infrared and Visible Images Fusion
- URL: http://arxiv.org/abs/2504.10871v2
- Date: Mon, 13 Oct 2025 14:48:24 GMT
- Title: DDFusion: Degradation-Decoupled Fusion Framework for Robust Infrared and Visible Images Fusion
- Authors: Tianpei Zhang, Jufeng Zhao, Yiming Zhu, Guangmang Cui, Yuxin Jing
- Abstract summary: We propose a Degradation-Decoupled Fusion (DDFusion) framework. DDFusion achieves superior fusion performance under both clean and degraded conditions.
- Score: 9.242363983469346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional infrared and visible image fusion (IVIF) methods often assume high-quality inputs, neglecting real-world degradations such as low-light and noise, which limits their practical applicability. To address this, we propose a Degradation-Decoupled Fusion (DDFusion) framework, which achieves degradation decoupling and jointly models degradation suppression and image fusion in a unified manner. Specifically, the Degradation-Decoupled Optimization Network (DDON) performs degradation-specific decomposition to decouple inter-degradation and degradation-information components, followed by component-specific extraction paths for effective suppression of degradation and enhancement of informative features. The Interactive Local-Global Fusion Network (ILGFN) aggregates complementary features across multi-scale pathways and alleviates performance degradation caused by the decoupling between degradation optimization and image fusion. Extensive experiments demonstrate that DDFusion achieves superior fusion performance under both clean and degraded conditions. Our code is available at https://github.com/Lmmh058/DDFusion.
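The decouple-suppress-fuse pipeline sketched in the abstract can be illustrated with a toy, non-learned example. The code below is a hand-written stand-in (all function names and filters are illustrative assumptions, not the paper's learned DDON and ILGFN networks): each input signal is split into a smooth "information" component and a residual "degradation" component, the degradation is attenuated rather than discarded, and the cleaned signals are fused.

```python
# Toy sketch of a degradation-decoupled fusion pipeline, assuming 1-D signals
# in [0, 1]. This is an illustration of the decouple -> suppress -> fuse idea
# only; the paper's actual DDON/ILGFN are learned networks.

def decompose(img, win=3):
    """Split a signal into an 'information' part (local mean) and a
    residual 'degradation' part (e.g. noise)."""
    n = len(img)
    info = []
    for i in range(n):
        lo, hi = max(0, i - win // 2), min(n, i + win // 2 + 1)
        info.append(sum(img[lo:hi]) / (hi - lo))
    degradation = [x - s for x, s in zip(img, info)]
    return info, degradation

def suppress(degradation, strength=0.9):
    """Attenuate the degradation component instead of discarding it."""
    return [d * (1.0 - strength) for d in degradation]

def fuse(ir, vis):
    """Pixel-wise max fusion of the cleaned infrared and visible signals."""
    return [max(a, b) for a, b in zip(ir, vis)]

def ddfusion_toy(ir, vis):
    ir_info, ir_deg = decompose(ir)
    vis_info, vis_deg = decompose(vis)
    ir_clean = [i + d for i, d in zip(ir_info, suppress(ir_deg))]
    vis_clean = [i + d for i, d in zip(vis_info, suppress(vis_deg))]
    return fuse(ir_clean, vis_clean)
```

Because each cleaned value is a convex combination of a pixel and its local mean, the fused output stays within the input range while high-frequency degradation is damped.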
Related papers
- Reversible Efficient Diffusion for Image Fusion [66.35113261837469]
Multi-modal image fusion aims to consolidate complementary information from diverse source images into a unified representation. While diffusion models have demonstrated impressive generative capabilities in image generation, they often suffer from detail loss when applied to image fusion tasks. This issue arises from the accumulation of noise errors inherent in the Markov process, leading to inconsistency and degradation in the fused results. We propose the Reversible Efficient Diffusion (RED) model, an explicitly supervised training framework that inherits the powerful generative capability of diffusion models while avoiding distribution estimation.
arXiv Detail & Related papers (2026-01-28T05:14:55Z) - MdaIF: Robust One-Stop Multi-Degradation-Aware Image Fusion with Language-Driven Semantics [8.783211177601045]
Infrared and visible image fusion aims to integrate complementary multi-modal information into a single fused result. We propose a one-stop degradation-aware image fusion framework for multi-degradation scenarios driven by a large language model (MdaIF). To adaptively extract diverse weather-aware degradation knowledge and scene feature representations, we employ a pre-trained vision-language model (VLM) in our framework.
arXiv Detail & Related papers (2025-11-16T09:43:12Z) - Coupled Degradation Modeling and Fusion: A VLM-Guided Degradation-Coupled Network for Degradation-Aware Infrared and Visible Image Fusion [9.915632806109555]
We propose a novel VLM-Guided Degradation-Coupled Fusion network (VGDCFusion). Our VGDCFusion significantly outperforms existing state-of-the-art fusion approaches under various degraded image scenarios.
arXiv Detail & Related papers (2025-10-13T14:26:33Z) - Dual-Domain Perspective on Degradation-Aware Fusion: A VLM-Guided Robust Infrared and Visible Image Fusion Framework [9.915632806109555]
GD2Fusion is a novel framework that integrates vision-language models for degradation perception with dual-domain (frequency/spatial) joint optimization. It achieves superior fusion performance compared with existing algorithms and strategies in dual-source degraded scenarios.
arXiv Detail & Related papers (2025-09-05T10:48:46Z) - SGDFuse: SAM-Guided Diffusion for High-Fidelity Infrared and Visible Image Fusion [38.09521879556221]
This paper proposes a conditional diffusion model guided by the Segment Anything Model (SAM) to achieve high-fidelity and semantically aware image fusion. The framework operates in a two-stage process: it first performs a preliminary fusion of multi-modal features, and then utilizes the semantic masks as a condition to drive the diffusion model's coarse-to-fine denoising generation. Extensive experiments demonstrate that SGDFuse achieves state-of-the-art performance in both subjective and objective evaluations.
arXiv Detail & Related papers (2025-08-07T10:58:52Z) - Infrared and Visible Image Fusion Based on Implicit Neural Representations [3.8530055385287403]
Infrared and visible light image fusion aims to combine the strengths of both modalities to generate images that are rich in information. This paper proposes an image fusion method based on Implicit Neural Representations (INR), referred to as INRFuse. Experimental results indicate that INRFuse outperforms existing methods in both subjective visual quality and objective evaluation metrics.
arXiv Detail & Related papers (2025-06-20T06:34:19Z) - DFVO: Learning Darkness-free Visible and Infrared Image Disentanglement and Fusion All at Once [57.15043822199561]
A Darkness-Free network is proposed to handle Visible and infrared image disentanglement and fusion all at Once (DFVO). DFVO employs a cascaded multi-task approach to replace the traditional two-stage cascaded training (enhancement and fusion). Our proposed approach outperforms state-of-the-art alternatives in terms of qualitative and quantitative evaluations.
arXiv Detail & Related papers (2025-05-07T15:59:45Z) - ControlFusion: A Controllable Image Fusion Framework with Language-Vision Degradation Prompts [82.52042409680267]
Current image fusion methods struggle to address the composite degradations encountered in real-world imaging scenarios. We propose a controllable image fusion framework with language-vision prompts, termed ControlFusion. In experiments, ControlFusion outperforms SOTA fusion methods in fusion quality and degradation handling.
arXiv Detail & Related papers (2025-03-30T08:18:53Z) - DSPFusion: Image Fusion via Degradation and Semantic Dual-Prior Guidance [48.84182709640984]
Existing fusion methods are tailored for high-quality images but struggle with degraded images captured under harsh circumstances. This work presents a Degradation and Semantic Prior dual-guided framework for degraded image Fusion (DSPFusion).
arXiv Detail & Related papers (2025-03-30T08:18:50Z) - Contourlet Refinement Gate Framework for Thermal Spectrum Distribution Regularized Infrared Image Super-Resolution [54.293362972473595]
Image super-resolution (SR) aims to reconstruct high-resolution (HR) images from their low-resolution (LR) counterparts.
Current approaches to SR tasks either focus on extracting RGB image features or assume similar degradation patterns.
We propose a Contourlet refinement gate framework to restore infrared modal-specific features while preserving spectral distribution fidelity.
arXiv Detail & Related papers (2024-11-19T14:24:03Z) - Infrared-Assisted Single-Stage Framework for Joint Restoration and Fusion of Visible and Infrared Images under Hazy Conditions [9.415977819944246]
We propose a joint learning framework that utilizes the infrared image for the restoration and fusion of hazy IR-VIS images. Our method effectively fuses IR-VIS images while removing haze, yielding clear, haze-free fusion results.
arXiv Detail & Related papers (2024-11-16T02:57:12Z) - DAF-Net: A Dual-Branch Feature Decomposition Fusion Network with Domain Adaptive for Infrared and Visible Image Fusion [21.64382683858586]
Infrared and visible image fusion aims to combine complementary information from both modalities to provide a more comprehensive scene understanding.
We propose a dual-branch feature decomposition fusion network (DAF-Net) with multi-kernel maximum mean discrepancy (MK-MMD) domain adaptation.
By incorporating MK-MMD, DAF-Net effectively aligns the latent feature spaces of visible and infrared images, thereby improving the quality of the fused images.
arXiv Detail & Related papers (2024-09-18T02:14:08Z) - A Dual Domain Multi-exposure Image Fusion Network based on the Spatial-Frequency Integration [57.14745782076976]
Multi-exposure image fusion aims to generate a single high-dynamic image by integrating images with different exposures.
We propose a novel perspective on multi-exposure image fusion via the Spatial-Frequency Integration Framework, named MEF-SFI.
Our method achieves visually appealing fusion results compared with state-of-the-art multi-exposure image fusion approaches.
arXiv Detail & Related papers (2023-12-17T04:45:15Z) - IAIFNet: An Illumination-Aware Infrared and Visible Image Fusion Network [13.11361803763253]
We propose an Illumination-Aware Infrared and Visible Image Fusion Network, named IAIFNet.
In our framework, an illumination enhancement network first estimates the incident illumination maps of input images.
With the help of the proposed adaptive differential fusion module (ADFM) and salient target aware module (STAM), an image fusion network effectively integrates the salient features of the illumination-enhanced infrared and visible images into a fused image of high visual quality.
arXiv Detail & Related papers (2023-09-26T15:12:29Z) - An Interactively Reinforced Paradigm for Joint Infrared-Visible Image Fusion and Saliency Object Detection [59.02821429555375]
This research focuses on the discovery and localization of hidden objects in the wild and serves unmanned systems.
Through empirical analysis, infrared and visible image fusion (IVIF) makes hard-to-find objects apparent, while multimodal salient object detection (SOD) accurately delineates the precise spatial location of objects within the picture.
arXiv Detail & Related papers (2023-05-17T06:48:35Z) - DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion [144.9653045465908]
We propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM).
Our approach yields promising fusion results in infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2023-03-13T04:06:42Z) - CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2022-11-26T02:40:28Z) - CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [68.78897015832113]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion. Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z) - Infrared and Visible Image Fusion via Interactive Compensatory Attention Adversarial Learning [7.995162257955025]
We propose a novel end-to-end model based on generative adversarial training to achieve a better fusion balance.
In particular, in the generator, we construct a multi-level encoder-decoder network with a triple path, and adopt infrared and visible paths to provide additional intensity and information gradient.
In addition, dual discriminators are designed to identify the similarity in distribution between the fused result and the source images, and the generator is optimized to produce a more balanced result.
arXiv Detail & Related papers (2022-03-29T08:28:14Z) - TGFuse: An Infrared and Visible Image Fusion Approach Based on
Transformer and Generative Adversarial Network [15.541268697843037]
We propose an infrared and visible image fusion algorithm based on a lightweight transformer module and adversarial learning.
Inspired by the global interaction power, we use the transformer technique to learn the effective global fusion relations.
The experimental performance demonstrates the effectiveness of the proposed modules, with superior improvement over the state of the art.
arXiv Detail & Related papers (2022-01-25T07:43:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.