Gradient-based multi-focus image fusion with focus-aware saliency enhancement
- URL: http://arxiv.org/abs/2509.22392v1
- Date: Fri, 26 Sep 2025 14:20:44 GMT
- Title: Gradient-based multi-focus image fusion with focus-aware saliency enhancement
- Authors: Haoyu Li, XiaoSong Li
- Abstract summary: Multi-focus image fusion (MFIF) aims to yield an all-focused image from multiple partially focused inputs. We propose an MFIF method based on significant boundary enhancement, which generates high-quality fused boundaries. Our method consistently outperforms 12 state-of-the-art methods in both subjective and objective evaluations.
- Score: 18.335216974790754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-focus image fusion (MFIF) aims to yield an all-focused image from multiple partially focused inputs, which is crucial in applications covering surveillance, microscopy, and computational photography. However, existing methods struggle to preserve sharp focus-defocus boundaries, often resulting in blurred transitions and loss of focused details. To solve this problem, we propose an MFIF method based on significant boundary enhancement, which generates high-quality fused boundaries while effectively detecting focus information. In particular, we propose a gradient-domain-based model that can obtain initial fusion results with complete boundaries and effectively preserve the boundary details. Additionally, we introduce Tenengrad gradient detection to extract salient features from both the source images and the initial fused image, generating the corresponding saliency maps. For boundary refinement, we develop a focus metric based on gradient and complementary information, integrating the salient features with the complementary information across images to emphasize focused regions and produce a high-quality initial decision result. Extensive experiments on four public datasets demonstrate that our method consistently outperforms 12 state-of-the-art methods in both subjective and objective evaluations. Our code is available at https://github.com/Lihyua/GICI
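The Tenengrad measure named in the abstract is a standard Sobel-based focus metric: the squared gradient magnitude accumulated over a local window. Below is a minimal sketch of that measure and of a naive per-pixel decision step; the `naive_decision_fusion` helper is an illustrative stand-in, not the paper's refined boundary-aware decision map, and the window size `win` is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def tenengrad_map(img, win=7):
    """Per-pixel Tenengrad focus measure: squared Sobel gradient
    magnitude, averaged over a local win x win window."""
    img = img.astype(np.float64)
    gx = sobel(img, axis=1)  # horizontal Sobel gradient
    gy = sobel(img, axis=0)  # vertical Sobel gradient
    return uniform_filter(gx**2 + gy**2, size=win)

def naive_decision_fusion(src_a, src_b, win=7):
    """Pick, per pixel, the source with the larger Tenengrad response.
    (A crude stand-in for the paper's refined decision result.)"""
    mask = tenengrad_map(src_a, win) >= tenengrad_map(src_b, win)
    return np.where(mask, src_a, src_b), mask
```

In practice such a raw decision map is noisy near the focus-defocus boundary, which is exactly the region the paper's gradient-domain model and boundary refinement are designed to handle.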
Related papers
- AngularFuse: A Closer Look at Angle-based Perception for Spatial-Sensitive Multi-Modality Image Fusion [54.84069863008752]
This paper proposes an angle-based perception framework for spatial-sensitive image fusion (AngularFuse). By combining Laplacian edge enhancement with an adaptive histogram, reference images with richer details and more balanced brightness are generated. Experiments on the MSRS, RoadScene, and M3FD public datasets show that AngularFuse outperforms existing mainstream methods by a clear margin.
arXiv Detail & Related papers (2025-10-14T08:13:15Z) - SGDFuse: SAM-Guided Diffusion for High-Fidelity Infrared and Visible Image Fusion [65.80051636480836]
This paper proposes a conditional diffusion model guided by the Segment Anything Model (SAM) to achieve high-fidelity and semantically-aware image fusion. The framework operates in a two-stage process: it first performs a preliminary fusion of multi-modal features, and then utilizes the semantic masks as a condition to drive the diffusion model's coarse-to-fine denoising generation. Extensive experiments demonstrate that SGDFuse achieves state-of-the-art performance in both subjective and objective evaluations.
arXiv Detail & Related papers (2025-08-07T10:58:52Z) - DFVO: Learning Darkness-free Visible and Infrared Image Disentanglement and Fusion All at Once [57.15043822199561]
A Darkness-Free network is proposed to handle Visible and infrared image disentanglement and fusion all at Once (DFVO). DFVO employs a cascaded multi-task approach to replace the traditional two-stage cascaded training (enhancement and fusion). Our proposed approach outperforms state-of-the-art alternatives in terms of qualitative and quantitative evaluations.
arXiv Detail & Related papers (2025-05-07T15:59:45Z) - SAMF: Small-Area-Aware Multi-focus Image Fusion for Object Detection [6.776991635789825]
Existing multi-focus image fusion (MFIF) methods often fail to preserve the uncertain transition region.
This study proposes a new small-area-aware MFIF algorithm for enhancing object detection capability.
arXiv Detail & Related papers (2024-01-16T13:35:28Z) - From Text to Pixels: A Context-Aware Semantic Synergy Solution for Infrared and Visible Image Fusion [66.33467192279514]
We introduce a text-guided multi-modality image fusion method that leverages the high-level semantics from textual descriptions to integrate semantics from infrared and visible images.
Our method not only produces visually superior fusion results but also achieves a higher detection mAP than existing methods, achieving state-of-the-art results.
arXiv Detail & Related papers (2023-12-31T08:13:47Z) - Bridging the Gap between Multi-focus and Multi-modal: A Focused Integration Framework for Multi-modal Image Fusion [5.417493475406649]
Multi-modal image fusion (MMIF) integrates valuable information from different modality images into a fused one.
This paper proposes an MMIF framework for joint focused integration and modality information extraction.
The proposed algorithm can surpass the state-of-the-art methods in visual perception and quantitative evaluation.
arXiv Detail & Related papers (2023-11-03T12:58:39Z) - Multi-Focus Image Fusion based on Gradient Transform [0.0]
We introduce a novel gradient information-based multi-focus image fusion method that is robust to the aforementioned problems.
The proposed method is compared with 17 different novel and conventional techniques both visually and objectively.
It is observed that the proposed method is promising according to visual evaluation, and an 83.3% success rate is achieved by ranking first in five out of six metrics according to objective evaluation.
arXiv Detail & Related papers (2022-04-20T20:35:12Z) - Light Field Saliency Detection with Dual Local Graph Learning and Reciprocative Guidance [148.9832328803202]
We model the information fusion within the focal stack via graph networks.
We build a novel dual graph model to guide the focal stack fusion process using all-focus patterns.
arXiv Detail & Related papers (2021-10-02T00:54:39Z) - Light Field Reconstruction via Deep Adaptive Fusion of Hybrid Lenses [67.01164492518481]
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses.
We propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input.
Our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
arXiv Detail & Related papers (2021-02-14T06:44:47Z) - Towards Reducing Severe Defocus Spread Effects for Multi-Focus Image Fusion via an Optimization Based Strategy [22.29205225281694]
Multi-focus image fusion (MFF) is a popular technique to generate an all-in-focus image.
This paper presents an optimization-based approach to reduce defocus spread effects.
Experiments conducted on the real-world dataset verify the superiority of the proposed model.
arXiv Detail & Related papers (2020-12-29T09:26:41Z) - MFIF-GAN: A New Generative Adversarial Network for Multi-Focus Image Fusion [29.405149234582623]
Multi-Focus Image Fusion (MFIF) is a promising technique to obtain all-in-focus images.
One of the research trends of MFIF is to avoid the defocus spread effect (DSE) around the focus/defocus boundary (FDB).
We propose a network termed MFIF-GAN to generate focus maps in which the foreground regions are correctly larger than the corresponding objects.
arXiv Detail & Related papers (2020-09-21T09:36:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.