Generation and Recombination for Multifocus Image Fusion with Free Number of Inputs
- URL: http://arxiv.org/abs/2309.04657v1
- Date: Sat, 9 Sep 2023 01:47:56 GMT
- Title: Generation and Recombination for Multifocus Image Fusion with Free Number of Inputs
- Authors: Huafeng Li, Dan Wang, Yuxin Huang, Yafei Zhang and Zhengtao Yu
- Abstract summary: Multifocus image fusion is an effective way to overcome the depth-of-field limitation of optical lenses.
Previous methods assume that the focused areas of the two source images are complementary, making it impossible to fuse more than two images simultaneously.
In GRFusion, the focus property of each source image is detected independently, enabling simultaneous fusion of multiple source images.
- Score: 17.32596568119519
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multifocus image fusion is an effective way to overcome the
depth-of-field limitation of optical lenses. Many existing methods obtain
fused results by generating decision maps. However, such methods often assume
that the focused areas of the two source images are complementary, making it
impossible to fuse more than two images simultaneously. Additionally, existing
methods ignore the impact of hard pixels on fusion performance, limiting the
visual quality of the fused image. To address these issues, a model combining
generation and recombination, termed GRFusion, is proposed. In GRFusion, the
focus property of each source image is detected independently, enabling
simultaneous fusion of multiple source images and avoiding the information
loss caused by alternating fusion. This makes GRFusion independent of the
number of inputs. To distinguish hard pixels in the source images, we identify
them through the inconsistency among the focus-detection results of the
source images. Furthermore, a multi-directional gradient embedding method for
generating full-focus images is proposed. Subsequently, a hard-pixel-guided
recombination mechanism for constructing the fused result is devised,
effectively integrating the complementary advantages of
feature-reconstruction-based and focused-pixel-recombination-based methods.
Extensive experimental results demonstrate the effectiveness and superiority
of the proposed method. The source code will be released at
https://github.com/xxx/xxx.
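
The abstract's core pipeline (independent per-image focus detection, hard-pixel identification via cross-map inconsistency, and hard-pixel-guided recombination with a generated full-focus image) can be made concrete with a small sketch. The snippet below is a minimal NumPy/SciPy illustration under loose assumptions, not the paper's implementation: GRFusion learns its focus maps and generates the full-focus image with networks, whereas here a classic Laplacian-energy focus measure and a caller-supplied `generated_full_focus` array stand in for both.

```python
# Minimal sketch of the detection-and-recombination idea, with hand-crafted
# focus measures standing in for GRFusion's learned networks.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_map(img, win=9):
    """Illustrative focus score: local energy of the Laplacian.
    (GRFusion instead detects focus properties with a network.)"""
    return uniform_filter(laplace(img.astype(np.float64)) ** 2, size=win)

def grfusion_sketch(images, generated_full_focus, margin=1.05):
    """images: list of N equally sized grayscale arrays (N is free).
    generated_full_focus: stand-in for the output of the paper's
    multi-directional gradient embedding generator (hypothetical input)."""
    scores = np.stack([focus_map(im) for im in images])        # (N, H, W)
    # Independent binary focus decision for every source image.
    decisions = scores >= scores.max(axis=0) / margin          # (N, H, W)
    # Hard pixels: the per-image decisions are inconsistent, i.e. the
    # pixel is claimed as focused by several sources at once.
    hard = decisions.sum(axis=0) != 1                          # (H, W)
    # Recombination: copy the focused pixel where the decision is clear...
    winner = scores.argmax(axis=0)
    stack = np.stack([im.astype(np.float64) for im in images])
    fused = np.take_along_axis(stack, winner[None, ...], axis=0)[0]
    # ...and fall back to the generated full-focus image at hard pixels.
    fused[hard] = generated_full_focus[hard]
    return fused, hard
```

Because each source image receives its own focus decision, nothing in this sketch depends on N = 2, which is the sense in which the method is free of the number of inputs.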
Related papers
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z)
- From Text to Pixels: A Context-Aware Semantic Synergy Solution for Infrared and Visible Image Fusion [66.33467192279514]
We introduce a text-guided multi-modality image fusion method that leverages the high-level semantics from textual descriptions to integrate semantics from infrared and visible images.
Our method not only produces visually superior fusion results but also achieves a higher detection mAP than existing methods, setting state-of-the-art results.
arXiv Detail & Related papers (2023-12-31T08:13:47Z)
- DePF: A Novel Fusion Approach based on Decomposition Pooling for Infrared and Visible Images [7.11574718614606]
A novel fusion network based on decomposition pooling (de-pooling) is proposed, termed DePF.
A de-pooling based encoder is designed to extract multi-scale image and detail features of the source images simultaneously.
Experimental results demonstrate that the proposed method outperforms state-of-the-art approaches in fusion performance.
arXiv Detail & Related papers (2023-05-27T05:47:14Z)
- Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder development: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z)
- DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion [144.9653045465908]
We propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM).
Our approach yields promising fusion results in infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2023-03-13T04:06:42Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Unsupervised Image Fusion Method based on Feature Mutual Mapping [16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections of pixels between the input source images (a minimal sketch of this idea follows the list below).
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z)
- Image Fusion Transformer [75.71025138448287]
In image fusion, images obtained from different sensors are fused to generate a single image with enhanced information.
In recent years, state-of-the-art methods have adopted Convolutional Neural Networks (CNNs) to encode meaningful features for image fusion.
We propose a novel Image Fusion Transformer (IFT) where we develop a transformer-based multi-scale fusion strategy.
arXiv Detail & Related papers (2021-07-19T16:42:49Z)
- UFA-FUSE: A novel deep supervised and hybrid model for multi-focus image fusion [4.105749631623888]
Traditional and deep learning-based fusion methods generate the intermediate decision map through a series of post-processing procedures.
Inspired by the image reconstruction techniques based on deep learning, we propose a multi-focus image fusion network framework.
We show that the proposed approach for multi-focus image fusion achieves remarkable fusion performance compared to 19 state-of-the-art fusion methods.
arXiv Detail & Related papers (2021-01-12T14:33:13Z)
- End-to-End Learning for Simultaneously Generating Decision Map and Multi-Focus Image Fusion Result [7.564462759345851]
The aim of multi-focus image fusion is to gather focused regions of different images to generate a unique all-in-focus fused image.
Most existing deep learning structures fail to balance fusion quality and end-to-end implementation convenience.
We propose a cascade network to simultaneously generate decision map and fused result with an end-to-end training procedure.
arXiv Detail & Related papers (2020-10-17T09:09:51Z)
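
As a side note on the Feature Mutual Mapping entry above, the "global map" it mentions can be read as a global affinity matrix over per-pixel features. The sketch below is a hedged illustration of that general idea only; the feature comparison (plain cosine similarity on given feature maps) and the function name are assumptions for illustration, not that paper's actual construction.

```python
# Illustrative global affinity map between the pixels of two source images,
# assuming per-pixel feature maps are already available.
import numpy as np

def global_affinity(feat_a, feat_b, eps=1e-8):
    """feat_a, feat_b: (C, H, W) feature maps of the two source images.
    Returns an (H*W, H*W) matrix; entry (i, j) scores how strongly
    pixel i of image A relates to pixel j of image B."""
    c, h, w = feat_a.shape
    a = feat_a.reshape(c, h * w)                                # (C, HW)
    b = feat_b.reshape(c, h * w)
    # L2-normalize each pixel's feature vector, then take dot products,
    # i.e. cosine similarity between every pixel pair across the images.
    a = a / (np.linalg.norm(a, axis=0, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=0, keepdims=True) + eps)
    return a.T @ b                                              # (HW, HW)
```

Such a map could then weight how much each source contributes at each pixel, for example via a softmax over its rows before aggregating features.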
This list is automatically generated from the titles and abstracts of the papers on this site.