Interactive Feature Embedding for Infrared and Visible Image Fusion
- URL: http://arxiv.org/abs/2211.04877v1
- Date: Wed, 9 Nov 2022 13:34:42 GMT
- Title: Interactive Feature Embedding for Infrared and Visible Image Fusion
- Authors: Fan Zhao and Wenda Zhao and Huchuan Lu
- Abstract summary: General deep learning-based methods for infrared and visible image fusion rely on an unsupervised mechanism for vital information retention.
We propose a novel interactive feature embedding within a self-supervised learning framework for infrared and visible image fusion.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: General deep learning-based methods for infrared and visible image fusion
rely on an unsupervised mechanism for vital information retention, implemented
through elaborately designed loss functions. However, the unsupervised mechanism
depends on a well-designed loss function, which cannot guarantee that all vital
information of the source images is sufficiently extracted. In this work, we
propose a novel interactive feature embedding within a self-supervised learning
framework for infrared and visible image fusion, attempting to overcome the
issue of vital information degradation. With the help of the self-supervised
learning framework, hierarchical representations of the source images can be
efficiently extracted. In particular, interactive feature embedding models are
carefully designed to build a bridge between self-supervised learning and
infrared and visible image fusion learning, achieving vital information
retention. Qualitative and quantitative evaluations show that the proposed
method performs favorably against state-of-the-art methods.
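To make the abstract's critique concrete, the sketch below illustrates the kind of hand-crafted unsupervised retention loss it refers to: the fused image is pushed toward the brighter source at each pixel and toward the sharper source in gradient terms. This is a minimal illustration in NumPy, not the paper's method; the function names, weights, and the max-based targets are assumptions chosen for clarity.

```python
import numpy as np

def gradient_magnitude(img):
    """Approximate per-pixel gradient magnitude with forward differences."""
    gx = np.diff(img, axis=1, append=img[:, -1:])  # horizontal gradient
    gy = np.diff(img, axis=0, append=img[-1:, :])  # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)

def fusion_loss(fused, ir, vis, w_int=1.0, w_grad=1.0):
    """Hypothetical unsupervised retention loss: the fused image should
    match the brighter source at each pixel (intensity term) and the
    sharper source at each pixel (gradient term)."""
    intensity_target = np.maximum(ir, vis)
    loss_int = np.mean(np.abs(fused - intensity_target))
    grad_target = np.maximum(gradient_magnitude(ir), gradient_magnitude(vis))
    loss_grad = np.mean(np.abs(gradient_magnitude(fused) - grad_target))
    return w_int * loss_int + w_grad * loss_grad
```

The abstract's point is precisely that such a loss only retains what its designer thought to encode (here, brightness and edges); information outside those terms can be lost, which motivates the proposed self-supervised alternative.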
Related papers
- Infrared-Assisted Single-Stage Framework for Joint Restoration and Fusion of Visible and Infrared Images under Hazy Conditions [9.415977819944246]
We propose a joint learning framework that utilizes infrared images for the restoration and fusion of hazy IR-VIS images.
Our method effectively fuses IR-VIS images while removing haze, yielding clear, haze-free fusion results.
arXiv Detail & Related papers (2024-11-16T02:57:12Z)
- Fusion of Infrared and Visible Images based on Spatial-Channel Attentional Mechanism [3.388001684915793]
We present AMFusionNet, an innovative approach to infrared and visible image fusion (IVIF).
By assimilating thermal details from infrared images with texture features from visible sources, our method produces images enriched with comprehensive information.
Our method outperforms state-of-the-art algorithms both qualitatively and quantitatively.
arXiv Detail & Related papers (2023-08-25T21:05:11Z)
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Despite this promise, unsupervised learning on diffusion-generated images remains under-explored.
We introduce customized solutions by fully exploiting the aforementioned free attention masks.
arXiv Detail & Related papers (2023-08-13T10:07:46Z)
- PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with gains of 15.3% mIoU over advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z)
- An Interactively Reinforced Paradigm for Joint Infrared-Visible Image Fusion and Saliency Object Detection [59.02821429555375]
This research focuses on the discovery and localization of hidden objects in the wild and serves unmanned systems.
Through empirical analysis, infrared and visible image fusion (IVIF) makes hard-to-find objects apparent, while multimodal salient object detection (SOD) accurately delineates the precise spatial location of objects within the picture.
arXiv Detail & Related papers (2023-05-17T06:48:35Z)
- Breaking Free from Fusion Rule: A Fully Semantic-driven Infrared and Visible Image Fusion [51.22863068854784]
Infrared and visible image fusion plays a vital role in the field of computer vision.
Previous approaches make efforts to design various fusion rules in the loss functions.
We develop a semantic-level fusion network to sufficiently utilize the semantic guidance.
arXiv Detail & Related papers (2022-11-22T13:59:59Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Image super-resolution reconstruction based on attention mechanism and feature fusion [3.42658286826597]
A network structure based on an attention mechanism and multi-scale feature fusion is proposed.
Experimental results show that the proposed method can achieve better performance over other representative super-resolution reconstruction algorithms.
arXiv Detail & Related papers (2020-04-08T11:20:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.