AE-Netv2: Optimization of Image Fusion Efficiency and Network
Architecture
- URL: http://arxiv.org/abs/2010.01863v2
- Date: Tue, 6 Oct 2020 07:58:49 GMT
- Title: AE-Netv2: Optimization of Image Fusion Efficiency and Network
Architecture
- Authors: Aiqing Fang, Xinbo Zhao, Jiaqi Yang, Beibei Qin, Yanning Zhang
- Abstract summary: We propose an efficient autonomous evolution image fusion method, dubbed AE-Netv2.
We discuss the influence of different network architectures on image fusion quality and fusion efficiency, which provides a reference for the design of image fusion architectures.
We explore the commonalities and characteristics of different image fusion tasks, which provides a basis for further research on the continual learning characteristics of the human brain in the field of image fusion.
- Score: 34.57055312296812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing image fusion methods pay little attention to image fusion
efficiency and network architecture. However, the efficiency and accuracy of
image fusion have an important impact on practical applications. To address
this problem, we propose an efficient autonomous evolution image fusion method,
dubbed AE-Netv2. Different from other deep-learning-based image fusion methods,
AE-Netv2 is inspired by the cognitive mechanism of the human brain. Firstly, we
discuss the influence of different network architectures on image fusion
quality and fusion efficiency, which provides a reference for the design of
image fusion architectures. Secondly, we explore the influence of the pooling
layer on the image fusion task and propose an image fusion method with pooling
layers. Finally, we explore the commonalities and characteristics of different
image fusion tasks, which provides a basis for further research on the
continual learning characteristics of the human brain in image fusion.
Comprehensive experiments demonstrate the superiority of AE-Netv2 over
state-of-the-art methods on different fusion tasks at a real-time speed of more
than 100 FPS on a GTX 2070. Among all tested deep-learning-based methods,
AE-Netv2 has the fastest speed, the smallest model size, and the best
robustness.
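The abstract does not include an implementation, so the following is a minimal
PyTorch sketch of the kind of lightweight encoder-pooling-decoder fusion
network and FPS measurement it describes. The TinyFusionNet name, layer widths,
and input sizes are illustrative assumptions, not the authors' AE-Netv2
architecture.

```python
# Minimal sketch (not the authors' AE-Netv2): a lightweight two-branch fusion
# network that uses pooling for downsampling, plus a simple FPS measurement.
# All layer widths and the TinyFusionNet name are illustrative assumptions.
import time
import torch
import torch.nn as nn

class TinyFusionNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        # Shared encoder: conv + max-pooling halves the spatial size (and the
        # compute), which is the efficiency trade-off the abstract discusses.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: fuse the concatenated features, then upsample back.
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, a, b):
        # a, b: two source images (e.g. infrared / visible), shape (N, 1, H, W)
        fa, fb = self.encoder(a), self.encoder(b)
        return self.decoder(torch.cat([fa, fb], dim=1))

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    net = TinyFusionNet().to(device).eval()
    a = torch.rand(1, 1, 256, 256, device=device)
    b = torch.rand(1, 1, 256, 256, device=device)
    with torch.no_grad():
        for _ in range(10):          # warm-up
            net(a, b)
        if device == "cuda":
            torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(100):
            net(a, b)
        if device == "cuda":
            torch.cuda.synchronize()
    print(f"~{100 / (time.time() - t0):.1f} FPS on {device}")
```

Pooling reduces the resolution of the feature maps, trading a small amount of
spatial detail for a large reduction in computation; the timing loop mirrors
the kind of FPS figure the paper reports.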
Related papers
- Fusion-Mamba for Cross-modality Object Detection [63.56296480951342]
Fusing complementary information from different modalities effectively improves object detection performance.
We design a Fusion-Mamba block (FMB) to map cross-modal features into a hidden state space for interaction.
Our proposed approach outperforms state-of-the-art methods in mAP by 5.9% on the M3FD dataset and 4.9% on the FLIR-Aligned dataset.
arXiv Detail & Related papers (2024-04-14T05:28:46Z)
- FusionMamba: Efficient Image Fusion with State Space Model [35.57157248152558]
Image fusion aims to generate a high-resolution multi/hyper-spectral image by combining a high-resolution image with limited spectral information and a low-resolution image with abundant spectral data.
Current deep learning (DL)-based methods for image fusion rely on CNNs or Transformers to extract features and merge different types of data.
We propose FusionMamba, an innovative method for efficient image fusion.
arXiv Detail & Related papers (2024-04-11T17:29:56Z)
- From Text to Pixels: A Context-Aware Semantic Synergy Solution for Infrared and Visible Image Fusion [66.33467192279514]
We introduce a text-guided multi-modality image fusion method that leverages the high-level semantics from textual descriptions to integrate semantics from infrared and visible images.
Our method not only produces visually superior fusion results but also achieves a higher detection mAP than existing methods, reaching state-of-the-art results.
arXiv Detail & Related papers (2023-12-31T08:13:47Z)
- A Task-guided, Implicitly-searched and Meta-initialized Deep Model for Image Fusion [69.10255211811007]
We present a Task-guided, Implicitly-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario.
Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
arXiv Detail & Related papers (2023-05-25T08:54:08Z)
- Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder development: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z)
- LRRNet: A Novel Representation Learning Guided Fusion Network for Infrared and Visible Images [98.36300655482196]
We formulate the fusion task mathematically, and establish a connection between its optimal solution and the network architecture that can implement it.
In particular, we adopt a learnable representation approach to the fusion task, in which the construction of the fusion network architecture is guided by the optimisation algorithm that produces the learnable model.
Based on this novel network architecture, an end-to-end lightweight fusion network is constructed to fuse infrared and visible light images.
arXiv Detail & Related papers (2023-04-11T12:11:23Z)
- Unsupervised Image Fusion Method based on Feature Mutual Mapping [16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections of pixels between the input source images.
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z)
- AE-Net: Autonomous Evolution Image Fusion Method Inspired by Human Cognitive Mechanism [34.57055312296812]
We propose a robust and general image fusion method with autonomous evolution ability, denoted as AE-Net.
Through the collaborative optimization of multiple image fusion methods, which simulates the cognitive process of the human brain, the unsupervised image fusion task can be transformed into a semi-supervised or supervised image fusion task (see the sketch after this entry).
Our image fusion method effectively unifies cross-modal and same-modal image fusion tasks and overcomes the difference in data distribution between different datasets.
arXiv Detail & Related papers (2020-07-17T05:19:51Z)
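The AE-Net entry above describes turning unsupervised fusion into a
semi-supervised or supervised task through the collaborative use of multiple
fusion methods. The sketch below shows one plausible reading of that idea, not
the authors' code: candidate outputs from several simple fusion operators are
scored with a toy no-reference quality measure, and the best candidate serves
as a pseudo-label for supervised training. The candidate operators and the
gradient-energy score are illustrative assumptions.

```python
# Sketch only (one reading of the AE-Net idea, not the authors' code): generate
# a pseudo-label by scoring several candidate fusions of the same source pair
# with a simple no-reference metric. The candidate operators and the
# variance-of-gradient score are illustrative assumptions.
import numpy as np

def candidate_fusions(a: np.ndarray, b: np.ndarray):
    """Cheap candidate fusion rules for two grayscale images in [0, 1]."""
    return {
        "average": 0.5 * (a + b),
        "maximum": np.maximum(a, b),
        "weighted": 0.7 * a + 0.3 * b,
    }

def quality(img: np.ndarray) -> float:
    """Toy no-reference score: local contrast measured by gradient energy."""
    gy, gx = np.gradient(img)
    return float(np.mean(gx ** 2 + gy ** 2))

def pseudo_label(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pick the best-scoring candidate; it can then supervise a fusion network."""
    cands = candidate_fusions(a, b)
    best = max(cands, key=lambda k: quality(cands[k]))
    return cands[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir, vis = rng.random((64, 64)), rng.random((64, 64))
    target = pseudo_label(ir, vis)
    print(target.shape, target.min(), target.max())
```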