AE-Net: Autonomous Evolution Image Fusion Method Inspired by Human
Cognitive Mechanism
- URL: http://arxiv.org/abs/2007.08763v1
- Date: Fri, 17 Jul 2020 05:19:51 GMT
- Authors: Aiqing Fang, Xinbo Zhao, Jiaqi Yang, Shihao Cao, Yanning Zhang
- Abstract summary: We propose a robust and general image fusion method with autonomous evolution ability, denoted AE-Net.
By collaboratively optimizing multiple image fusion methods to simulate the cognitive process of the human brain, the unsupervised image fusion task can be transformed into a semi-supervised or supervised image fusion task.
Our method effectively unifies cross-modal and same-modal image fusion tasks and overcomes differences in data distribution between datasets.
- Score: 34.57055312296812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to solve the robustness and generality problems of the image fusion
task, inspired by the human brain cognitive mechanism, we propose a robust and
general image fusion method with autonomous evolution ability, denoted AE-Net.
By collaboratively optimizing multiple image fusion methods to simulate the
cognitive process of the human brain, the unsupervised image fusion task can be
transformed into a semi-supervised or supervised image fusion task, thus
promoting the evolutionary ability of the network model weights. Firstly, the
relationship between the human brain cognitive mechanism and the image fusion
task is analyzed, and a physical model is established to simulate this
cognitive mechanism. Secondly, we analyze existing image fusion methods and
image fusion loss functions, select image fusion methods with complementary
features to construct the algorithm module, and establish a multi-loss joint
evaluation function to obtain the optimal solution of the algorithm module. The
optimal solution for each image is then used to guide the weight training of
the network model. Our image fusion method effectively unifies the cross-modal
and same-modal image fusion tasks and overcomes differences in data
distribution between datasets. Finally, extensive numerical results verify the
effectiveness and superiority of our method on a variety of image fusion
datasets, including multi-focus, infrared and visible, medical image, and
multi-exposure datasets. Comprehensive experiments demonstrate the superiority
of our image fusion method in robustness and generality. In addition,
experimental results also demonstrate the effectiveness of the human brain
cognitive mechanism in improving the robustness and generality of image fusion.
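The abstract's core mechanism can be sketched as follows: several candidate fusion methods are run on the same source pair, each candidate is scored by a multi-loss joint evaluation function, and the lowest-loss candidate serves as the pseudo-label that supervises network training. This is a minimal illustrative sketch, assuming simple averaging, max, and gradient-weighted fusion candidates and an intensity-plus-gradient joint loss; all function names and loss weights here are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of the multi-loss joint evaluation described in the
# abstract: score candidate fusions and keep the best one as a pseudo-label.
import numpy as np

def fuse_average(a, b):
    # Candidate 1: pixel-wise mean of the two source images.
    return (a + b) / 2.0

def fuse_max(a, b):
    # Candidate 2: pixel-wise maximum (keeps the brighter source).
    return np.maximum(a, b)

def fuse_grad_weighted(a, b):
    # Candidate 3: weight each pixel by local gradient magnitude,
    # favoring whichever source carries more detail there.
    ga = np.hypot(*np.gradient(a))
    gb = np.hypot(*np.gradient(b))
    w = ga / (ga + gb + 1e-8)
    return w * a + (1.0 - w) * b

def joint_loss(fused, a, b, w_int=1.0, w_grad=1.0):
    # Illustrative joint evaluation: an intensity-fidelity term plus a
    # gradient-preservation term (weights w_int, w_grad are assumptions).
    intensity = np.mean((fused - (a + b) / 2.0) ** 2)
    gf = np.hypot(*np.gradient(fused))
    gmax = np.maximum(np.hypot(*np.gradient(a)), np.hypot(*np.gradient(b)))
    grad = np.mean((gf - gmax) ** 2)
    return w_int * intensity + w_grad * grad

def best_pseudo_label(a, b):
    # Evaluate every candidate and return the one with the lowest joint loss;
    # this output would guide the weight training of the fusion network.
    candidates = [fuse_average(a, b), fuse_max(a, b), fuse_grad_weighted(a, b)]
    losses = [joint_loss(c, a, b) for c in candidates]
    return candidates[int(np.argmin(losses))]

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = rng.random((32, 32))
label = best_pseudo_label(a, b)  # pseudo-label for semi-/supervised training
```

In the paper's framing, this selection step is what turns an unsupervised fusion task into a semi-supervised or supervised one: the best module output stands in for a ground-truth fused image.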
Related papers
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z) - From Text to Pixels: A Context-Aware Semantic Synergy Solution for
Infrared and Visible Image Fusion [66.33467192279514]
We introduce a text-guided multi-modality image fusion method that leverages the high-level semantics from textual descriptions to integrate semantics from infrared and visible images.
Our method not only produces visually superior fusion results but also achieves a higher detection mAP over existing methods, achieving state-of-the-art results.
arXiv Detail & Related papers (2023-12-31T08:13:47Z) - A Task-guided, Implicitly-searched and Meta-initialized Deep Model for
Image Fusion [69.10255211811007]
We present a Task-guided, Implicit-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario.
Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
arXiv Detail & Related papers (2023-05-25T08:54:08Z) - Equivariant Multi-Modality Image Fusion [124.11300001864579]
We propose the Equivariant Multi-Modality imAge fusion paradigm for end-to-end self-supervised learning.
Our approach is rooted in the prior knowledge that natural imaging responses are equivariant to certain transformations.
Experiments confirm that EMMA yields high-quality fusion results for infrared-visible and medical images.
arXiv Detail & Related papers (2023-05-19T05:50:24Z) - DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion [144.9653045465908]
We propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM)
Our approach yields promising fusion results in infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2023-03-13T04:06:42Z) - CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature
Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z) - Unsupervised Image Fusion Method based on Feature Mutual Mapping [16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections of pixels between the input source images.
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z) - TransFuse: A Unified Transformer-based Image Fusion Framework using
Self-supervised Learning [5.849513679510834]
Image fusion is a technique to integrate information from multiple source images with complementary information to improve the richness of a single image.
Two-stage methods avoid the need for large amounts of task-specific training data by training an encoder-decoder network on large natural image datasets.
We propose a destruction-reconstruction based self-supervised training scheme to encourage the network to learn task-specific features.
arXiv Detail & Related papers (2022-01-19T07:30:44Z) - AE-Netv2: Optimization of Image Fusion Efficiency and Network
Architecture [34.57055312296812]
We propose an efficient autonomous evolution image fusion method, dubbed AE-Netv2.
We discuss the influence of different network architecture on image fusion quality and fusion efficiency, which provides a reference for the design of image fusion architecture.
We explore the commonalities and characteristics of different image fusion tasks, which provides a basis for further research on the continuous-learning characteristics of the human brain in the field of image fusion.
arXiv Detail & Related papers (2020-10-05T08:58:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.