Quaternion Sparse Decomposition for Multi-focus Color Image Fusion
- URL: http://arxiv.org/abs/2505.02365v2
- Date: Wed, 06 Aug 2025 07:50:57 GMT
- Title: Quaternion Sparse Decomposition for Multi-focus Color Image Fusion
- Authors: Weihua Yang, Yicong Zhou
- Abstract summary: Multi-focus color image fusion refers to integrating multiple partially focused color images to create a single all-in-focus color image. Existing methods struggle with complex real-world scenarios due to limitations in handling color information and intricate textures. This paper proposes a quaternion multi-focus color image fusion framework to perform high-quality color image fusion completely in the quaternion domain.
- Score: 38.47237002133678
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Multi-focus color image fusion refers to integrating multiple partially focused color images to create a single all-in-focus color image. However, existing methods struggle with complex real-world scenarios due to limitations in handling color information and intricate textures. To address these challenges, this paper proposes a quaternion multi-focus color image fusion framework to perform high-quality color image fusion completely in the quaternion domain. This framework introduces 1) a quaternion sparse decomposition model to jointly learn fine-scale image details and structure information of color images in an iterative fashion for high-precision focus detection, 2) a quaternion base-detail fusion strategy to individually fuse base-scale and detail-scale results across multiple color images for preserving structure and detail information, and 3) a quaternion structural similarity refinement strategy to adaptively select optimal patches from initial fusion results and obtain the final fused result for preserving fine details and ensuring spatially consistent outputs. Extensive experiments demonstrate that the proposed framework outperforms state-of-the-art methods.
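As a minimal illustration of what processing "completely in the quaternion domain" means (a sketch of the general idea, not the authors' model), each RGB pixel can be encoded as a pure quaternion q = R·i + G·j + B·k, so the three color channels are handled as one algebraic entity, e.g. under the Hamilton product, rather than channel by channel:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (..., 4) arrays [w, x, y, z]."""
    pw, px, py, pz = np.moveaxis(p, -1, 0)
    qw, qx, qy, qz = np.moveaxis(q, -1, 0)
    return np.stack([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ], axis=-1)

def rgb_to_quaternion(img):
    """Encode an H x W x 3 RGB image as pure quaternions: q = R*i + G*j + B*k."""
    h, w, _ = img.shape
    q = np.zeros((h, w, 4), dtype=np.float64)
    q[..., 1:] = img  # zero real part, channels in the imaginary parts
    return q

# Toy 2x2 color image: red, green, blue, gray pixels.
img = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                [[0.0, 0.0, 1.0], [0.5, 0.5, 0.5]]])
Q = rgb_to_quaternion(img)
# The quaternion modulus couples all three channels into one magnitude per pixel.
mod = np.linalg.norm(Q, axis=-1)
```

Quaternion-domain models such as the one proposed here build sparse decompositions and similarity measures on top of this joint representation, which is what lets them preserve inter-channel color correlations that per-channel methods discard.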
Related papers
- Gradient-based multi-focus image fusion with focus-aware saliency enhancement [18.335216974790754]
Multi-focus image fusion (MFIF) aims to yield an all-focused image from multiple partially focused inputs. We propose an MFIF method based on significant boundary enhancement, which generates high-quality fused boundaries. Our method consistently outperforms 12 state-of-the-art methods in both subjective and objective evaluations.
arXiv Detail & Related papers (2025-09-26T14:20:44Z) - DFVO: Learning Darkness-free Visible and Infrared Image Disentanglement and Fusion All at Once [57.15043822199561]
A Darkness-Free network is proposed to handle Visible and infrared image disentanglement and fusion all at Once (DFVO). DFVO employs a cascaded multi-task approach to replace the traditional two-stage cascaded training (enhancement and fusion). Our proposed approach outperforms state-of-the-art alternatives in both qualitative and quantitative evaluations.
arXiv Detail & Related papers (2025-05-07T15:59:45Z) - Little Strokes Fell Great Oaks: Boosting the Hierarchical Features for Multi-exposure Image Fusion [18.53770637220984]
This study proposes a gamma correction module specifically designed to fully leverage latent information embedded within source images.
A novel color enhancement algorithm is presented to augment image saturation while preserving intricate details.
arXiv Detail & Related papers (2024-04-09T05:44:00Z) - From Text to Pixels: A Context-Aware Semantic Synergy Solution for Infrared and Visible Image Fusion [66.33467192279514]
We introduce a text-guided multi-modality image fusion method that leverages the high-level semantics from textual descriptions to integrate semantics from infrared and visible images.
Our method not only produces visually superior fusion results but also achieves a higher detection mAP over existing methods, achieving state-of-the-art results.
arXiv Detail & Related papers (2023-12-31T08:13:47Z) - Generation and Recombination for Multifocus Image Fusion with Free Number of Inputs [17.32596568119519]
Multifocus image fusion is an effective way to overcome the limitation of optical lenses.
Previous methods assume that the focused areas of the two source images are complementary, making it impossible to achieve simultaneous fusion of multiple images.
In GRFusion, focus property detection of each source image can be implemented independently, enabling simultaneous fusion of multiple source images.
arXiv Detail & Related papers (2023-09-09T01:47:56Z) - Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder the development, including pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z) - Equivariant Multi-Modality Image Fusion [124.11300001864579]
We propose the Equivariant Multi-Modality imAge fusion paradigm for end-to-end self-supervised learning.
Our approach is rooted in the prior knowledge that natural imaging responses are equivariant to certain transformations.
Experiments confirm that EMMA yields high-quality fusion results for infrared-visible and medical images.
arXiv Detail & Related papers (2023-05-19T05:50:24Z) - Multi-modal Gated Mixture of Local-to-Global Experts for Dynamic Image Fusion [59.19469551774703]
Infrared and visible image fusion aims to integrate comprehensive information from multiple sources to achieve superior performances on various practical tasks.
We propose a dynamic image fusion framework with a multi-modal gated mixture of local-to-global experts.
Our model consists of a Mixture of Local Experts (MoLE) and a Mixture of Global Experts (MoGE) guided by a multi-modal gate.
arXiv Detail & Related papers (2023-02-02T20:06:58Z) - Multispectral image fusion by super pixel statistics [1.4685355149711299]
I address the task of fusing visible color (RGB) images with near-infrared (NIR) images.
The RGB image captures the color of the scene while the NIR captures details and sees beyond haze and clouds.
The proposed method is designed to produce a fusion that combines the advantages of both spectra.
arXiv Detail & Related papers (2021-12-21T16:19:10Z) - UFA-FUSE: A novel deep supervised and hybrid model for multi-focus image fusion [4.105749631623888]
Traditional and deep learning-based fusion methods generate the intermediate decision map through a series of post-processing procedures.
Inspired by the image reconstruction techniques based on deep learning, we propose a multi-focus image fusion network framework.
We show that the proposed approach for multi-focus image fusion achieves remarkable fusion performance compared to 19 state-of-the-art fusion methods.
arXiv Detail & Related papers (2021-01-12T14:33:13Z) - Deep Image Compositing [93.75358242750752]
We propose a new method which can automatically generate high-quality image composites without any user input.
Inspired by Laplacian pyramid blending, a dense-connected multi-stream fusion network is proposed to effectively fuse the information from the foreground and background images.
Experiments show that the proposed method can automatically generate high-quality composites and outperforms existing methods both qualitatively and quantitatively.
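The classical Laplacian pyramid blending that inspired this network can be sketched in plain NumPy (a textbook illustration, not the paper's network; the box-filter down/up-sampling and the `levels` parameter are simplifying assumptions):

```python
import numpy as np

def down(img):
    """Halve resolution with a 2x2 box filter (dimensions must be even)."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def up(img):
    """Double resolution by nearest-neighbor replication."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_blend(a, b, mask, levels=3):
    """Blend grayscale images a and b via Laplacian pyramids.

    mask is in [0, 1]: 1 takes a, 0 takes b; a Gaussian pyramid of the
    mask blends each detail level at its own scale, avoiding seams.
    """
    la, lb, gm = [], [], [mask]
    ca, cb = a, b
    for _ in range(levels):
        da, db = down(ca), down(cb)
        la.append(ca - up(da))   # detail (Laplacian) level of a
        lb.append(cb - up(db))   # detail (Laplacian) level of b
        ca, cb = da, db
        gm.append(down(gm[-1]))  # mask at matching resolution
    la.append(ca)                # coarsest residuals
    lb.append(cb)
    # Blend the coarsest level, then add back blended details.
    out = gm[levels] * la[levels] + (1 - gm[levels]) * lb[levels]
    for lvl in range(levels - 1, -1, -1):
        out = up(out) + gm[lvl] * la[lvl] + (1 - gm[lvl]) * lb[lvl]
    return out
```

A learned multi-stream network replaces the hand-crafted per-level blending rule above with convolutional features, but the multi-scale decompose-blend-collapse structure is the same.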
arXiv Detail & Related papers (2020-11-04T06:12:24Z) - End-to-End Learning for Simultaneously Generating Decision Map and Multi-Focus Image Fusion Result [7.564462759345851]
The aim of multi-focus image fusion is to gather focused regions of different images to generate a unique all-in-focus fused image.
Most existing deep learning structures fail to balance fusion quality and end-to-end implementation convenience.
We propose a cascade network to simultaneously generate decision map and fused result with an end-to-end training procedure.
arXiv Detail & Related papers (2020-10-17T09:09:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed above and is not responsible for any consequences of its use.