UFA-FUSE: A novel deep supervised and hybrid model for multi-focus image fusion
- URL: http://arxiv.org/abs/2101.04506v1
- Date: Tue, 12 Jan 2021 14:33:13 GMT
- Title: UFA-FUSE: A novel deep supervised and hybrid model for multi-focus image fusion
- Authors: Yongsheng Zang, Dongming Zhou, Changcheng Wang, Rencan Nie, and Yanbu Guo
- Abstract summary: Traditional and deep learning-based fusion methods generate an intermediate decision map and obtain the fused image through a series of post-processing procedures.
Inspired by the image reconstruction techniques based on deep learning, we propose a multi-focus image fusion network framework.
We show that the proposed approach for multi-focus image fusion achieves remarkable fusion performance compared to 19 state-of-the-art fusion methods.
- Score: 4.105749631623888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional and deep learning-based fusion methods generate an
intermediate decision map and obtain the fused image through a series of
post-processing procedures. However, the fusion results produced by these
methods tend to lose source image details or introduce artifacts. Inspired by
deep learning-based image reconstruction techniques, we propose a multi-focus
image fusion network framework that requires no post-processing and solves
these problems in an end-to-end, supervised manner. To train the fusion model
sufficiently, we have generated a large-scale multi-focus image dataset with
ground-truth fusion images. Moreover, to obtain a more informative fused image,
we design a novel fusion strategy based on unity fusion attention, which is
composed of a channel attention module and a spatial attention module.
Specifically, the proposed approach comprises three key components: feature
extraction, feature fusion, and image reconstruction. We first use seven
convolutional blocks to extract image features from the source images. The
extracted features are then fused by the proposed strategy in the feature
fusion layer. Finally, the fused features are reconstructed into the output
image by four convolutional blocks. Experimental results demonstrate that the
proposed approach achieves remarkable fusion performance compared to 19
state-of-the-art fusion methods.
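To make the described pipeline concrete, below is a minimal PyTorch sketch of the three stages: seven convolutional blocks for feature extraction, a unity-fusion-attention merge (channel attention followed by spatial attention), and four convolutional blocks for reconstruction. The block counts follow the abstract; everything else, including kernel sizes, the squeeze-and-excitation-style channel gate, the pooled-descriptor spatial gate, and the elementwise-sum merge, is an assumption for illustration, not the authors' released implementation.

```python
# Hedged sketch of a UFA-FUSE-style network; layer details are assumed.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # One "convolutional block": 3x3 conv + ReLU (exact layout assumed).
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(inplace=True))

class UnityFusionAttention(nn.Module):
    """Fuses two feature maps via channel attention then spatial attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze-and-excitation-style gating (assumed form).
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())
        # Spatial attention: 7x7 conv over avg/max-pooled maps (assumed form).
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def attend(self, f):
        f = f * self.channel_gate(f)                      # reweight channels
        pooled = torch.cat([f.mean(1, keepdim=True),      # per-pixel avg
                            f.amax(1, keepdim=True)], 1)  # per-pixel max
        return f * self.spatial_gate(pooled)              # reweight locations

    def forward(self, f1, f2):
        # Weight each source's features, then merge (elementwise sum assumed).
        return self.attend(f1) + self.attend(f2)

class FusionNet(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # Feature extraction: seven convolutional blocks (per the abstract).
        blocks = [conv_block(1, ch)] + [conv_block(ch, ch) for _ in range(6)]
        self.encoder = nn.Sequential(*blocks)
        self.fusion = UnityFusionAttention(ch)
        # Image reconstruction: four convolutional blocks, the last linear.
        self.decoder = nn.Sequential(
            conv_block(ch, ch), conv_block(ch, ch), conv_block(ch, ch),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, a, b):
        return self.decoder(self.fusion(self.encoder(a), self.encoder(b)))

# Usage: fuse two grayscale source images of the same size.
fused = FusionNet()(torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128))
```

In the supervised setup the abstract describes, such a network would be trained end-to-end against the ground-truth fusion images of the generated dataset, e.g. with a pixel-wise reconstruction loss.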
Related papers
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z)
- Generation and Recombination for Multifocus Image Fusion with Free Number of Inputs [17.32596568119519]
Multifocus image fusion is an effective way to overcome the limitation of optical lenses.
Previous methods assume that the focused areas of the two source images are complementary, making it impossible to achieve simultaneous fusion of multiple images.
In GRFusion, focus property detection of each source image can be implemented independently, enabling simultaneous fusion of multiple source images.
arXiv Detail & Related papers (2023-09-09T01:47:56Z)
- DePF: A Novel Fusion Approach based on Decomposition Pooling for Infrared and Visible Images [7.11574718614606]
A novel fusion network based on decomposition pooling (de-pooling) is proposed, termed DePF.
A de-pooling based encoder is designed to extract multi-scale image and detail features of source images at the same time.
The experimental results demonstrate that the proposed method exhibits superior fusion performance over state-of-the-art methods.
arXiv Detail & Related papers (2023-05-27T05:47:14Z)
- A Task-guided, Implicitly-searched and Meta-initialized Deep Model for Image Fusion [69.10255211811007]
We present a Task-guided, Implicitly-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario.
Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
arXiv Detail & Related papers (2023-05-25T08:54:08Z)
- Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder development: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z)
- Equivariant Multi-Modality Image Fusion [124.11300001864579]
We propose the Equivariant Multi-Modality imAge fusion paradigm for end-to-end self-supervised learning.
Our approach is rooted in the prior knowledge that natural imaging responses are equivariant to certain transformations.
Experiments confirm that EMMA yields high-quality fusion results for infrared-visible and medical images.
arXiv Detail & Related papers (2023-05-19T05:50:24Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Unsupervised Image Fusion Method based on Feature Mutual Mapping [16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections of pixels between the input source images.
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z)
- Image Fusion Transformer [75.71025138448287]
In image fusion, images obtained from different sensors are fused to generate a single image with enhanced information.
In recent years, state-of-the-art methods have adopted Convolutional Neural Networks (CNNs) to encode meaningful features for image fusion.
We propose a novel Image Fusion Transformer (IFT) where we develop a transformer-based multi-scale fusion strategy.
arXiv Detail & Related papers (2021-07-19T16:42:49Z)
- End-to-End Learning for Simultaneously Generating Decision Map and Multi-Focus Image Fusion Result [7.564462759345851]
The aim of multi-focus image fusion is to gather focused regions of different images to generate a unique all-in-focus fused image.
Most existing deep learning structures fail to balance fusion quality and end-to-end implementation convenience.
We propose a cascade network to simultaneously generate decision map and fused result with an end-to-end training procedure.
arXiv Detail & Related papers (2020-10-17T09:09:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.