CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature
Ensemble for Multi-modality Image Fusion
- URL: http://arxiv.org/abs/2211.10960v2
- Date: Sat, 14 Oct 2023 07:39:29 GMT
- Title: CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature
Ensemble for Multi-modality Image Fusion
- Authors: Jinyuan Liu, Runjia Lin, Guanyao Wu, Risheng Liu, Zhongxuan Luo, Xin
Fan
- Abstract summary: We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
- Score: 72.8898811120795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Infrared and visible image fusion aims to provide an informative
image by combining complementary information from different sensors. Existing
learning-based fusion approaches attempt to construct various loss functions
to preserve complementary features, while neglecting to discover the
inter-relationship between the two modalities, leading to redundant or even
invalid information in the fusion results. Moreover, most methods focus on
strengthening the network by increasing its depth while neglecting the
importance of feature transmission, causing degeneration of vital information.
To alleviate these issues, we propose a coupled contrastive learning network,
dubbed CoCoNet, to realize infrared and visible image fusion in an end-to-end
manner. Concretely, to simultaneously retain typical features from both
modalities and to avoid artifacts emerging in the fused result, we develop a
coupled contrastive constraint in our loss function. In a fused image, the
foreground target / background detail part is pulled close to the infrared /
visible source and pushed far away from the visible / infrared source in the
representation space. We further exploit image characteristics to provide
data-sensitive weights, allowing our loss function to build a more reliable
relationship with the source images. A multi-level attention module is
established to learn rich hierarchical feature representations and to
comprehensively transfer features in the fusion process. We also apply the
proposed CoCoNet to medical image fusion of different types, e.g., magnetic
resonance, positron emission tomography, and single-photon emission computed
tomography images. Extensive experiments demonstrate that our method achieves
state-of-the-art (SOTA) performance under both subjective and objective
evaluation, especially in preserving prominent targets and recovering vital
textural details.
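A minimal sketch of how the coupled contrastive constraint described above
might look in PyTorch. Everything here is an assumption rather than the
paper's exact formulation: a frozen VGG-16 backbone stands in for the
representation space, `mask` is a precomputed foreground saliency mask, and a
positive-to-negative distance ratio serves as the contrastive form.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen feature extractor standing in for the representation space
# (an assumption; the abstract does not pin down the encoder).
_vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def _feat(x: torch.Tensor) -> torch.Tensor:
    """Embed a (B, 1, H, W) image; replicate to 3 channels for VGG."""
    return _vgg(x.repeat(1, 3, 1, 1))

def coupled_contrastive_loss(fused, ir, vis, mask, eps=1e-6):
    """Pull the fused foreground toward the infrared source and push it
    away from the visible one; symmetrically, pull the fused background
    toward the visible source and push it away from the infrared one.
    `mask` marks foreground targets in [0, 1]."""
    bg = 1.0 - mask
    # Foreground: anchor = fused fg, positive = IR fg, negative = visible fg.
    a, p, n = _feat(fused * mask), _feat(ir * mask), _feat(vis * mask)
    l_fg = F.l1_loss(a, p) / (F.l1_loss(a, n) + eps)
    # Background: anchor = fused bg, positive = visible bg, negative = IR bg.
    a, p, n = _feat(fused * bg), _feat(vis * bg), _feat(ir * bg)
    l_bg = F.l1_loss(a, p) / (F.l1_loss(a, n) + eps)
    return l_fg + l_bg
```

The data-sensitive weights mentioned above would plausibly scale these two
terms per image (e.g., from intensity statistics of the sources); that
weighting scheme is likewise not specified here.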
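The multi-level attention module is described only at a high level. A generic
reading is that features from several encoder depths are re-weighted by
attention and merged so that shallow detail and deep semantics both reach the
fusion decoder; the squeeze-and-excitation style gate, channel sizes, and
module names below are all assumptions, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Channel gate in the squeeze-and-excitation style (an assumption)."""
    def __init__(self, ch: int, r: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // r, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class MultiLevelEnsemble(nn.Module):
    """Gate, upsample, and concatenate features from every encoder level,
    then project back to the decoder's channel width."""
    def __init__(self, chs=(64, 128, 256)):
        super().__init__()
        self.attn = nn.ModuleList(ChannelAttention(c) for c in chs)
        self.fuse = nn.Conv2d(sum(chs), chs[0], 3, padding=1)

    def forward(self, feats):
        # feats: per-level maps, highest resolution first.
        h, w = feats[0].shape[-2:]
        gated = [F.interpolate(a(f), size=(h, w), mode="bilinear",
                               align_corners=False)
                 for a, f in zip(self.attn, feats)]
        return self.fuse(torch.cat(gated, dim=1))
```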
Related papers
- From Text to Pixels: A Context-Aware Semantic Synergy Solution for
Infrared and Visible Image Fusion [66.33467192279514]
We introduce a text-guided multi-modality image fusion method that leverages the high-level semantics from textual descriptions to integrate semantics from infrared and visible images.
Our method not only produces visually superior fusion results but also
achieves a higher detection mAP than existing methods, setting new
state-of-the-art results.
arXiv Detail & Related papers (2023-12-31T08:13:47Z)
- A Multi-scale Information Integration Framework for Infrared and Visible
Image Fusion [50.84746752058516]
Infrared and visible image fusion aims at generating a fused image containing intensity and detail information of source images.
Existing methods mostly adopt a simple weight in the loss function to decide the information retention of each modality.
We propose a multi-scale dual attention (MDA) framework for infrared and visible image fusion.
arXiv Detail & Related papers (2023-12-07T14:40:05Z)
- PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant
Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances the robustness, with gains of 15.3% mIOU, compared with advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z)
- An Interactively Reinforced Paradigm for Joint Infrared-Visible Image
Fusion and Saliency Object Detection [59.02821429555375]
This research focuses on the discovery and localization of hidden objects in the wild and serves unmanned systems.
Through empirical analysis, infrared and visible image fusion (IVIF) makes
hard-to-find objects apparent, while multimodal salient object detection (SOD)
accurately delineates the precise spatial location of objects within the
picture.
arXiv Detail & Related papers (2023-05-17T06:48:35Z)
- Interactive Feature Embedding for Infrared and Visible Image Fusion
[94.77188069479155]
General deep learning-based methods for infrared and visible image fusion rely
on an unsupervised mechanism for vital information retention.
We propose a novel interactive feature embedding within a self-supervised
learning framework for infrared and visible image fusion.
arXiv Detail & Related papers (2022-11-09T13:34:42Z)
- Infrared and Visible Image Fusion via Interactive Compensatory Attention
Adversarial Learning [7.995162257955025]
We propose a novel end-to-end model based on generative adversarial training
to achieve a better fusion balance.
In particular, in the generator we construct a multi-level encoder-decoder
network with a triple path, adopting the infrared and visible paths to provide
additional intensity and gradient information.
In addition, dual discriminators are designed to identify the similarity of
distribution between the fused result and the source images, and the generator
is optimized to produce a more balanced result.
arXiv Detail & Related papers (2022-03-29T08:28:14Z)
- Unsupervised Image Fusion Method based on Feature Mutual Mapping
[16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections between pixels of the
input source images.
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z)
- A Dual-branch Network for Infrared and Visible Image Fusion
[20.15854042473049]
We propose a new method based on dense blocks and GANs.
We directly insert the input visible-light image into each layer of the entire
network.
Our experiments show that the fused images obtained by our approach achieve
good scores on multiple evaluation metrics.
arXiv Detail & Related papers (2021-01-24T04:18:32Z)
- When Image Decomposition Meets Deep Learning: A Novel Infrared and
Visible Image Fusion Method [27.507158159317417]
Infrared and visible image fusion is a hot topic in image processing and image enhancement.
We propose a novel dual-stream auto-encoder based fusion network.
arXiv Detail & Related papers (2020-09-02T19:32:28Z)