Decomposition-based and Interference Perception for Infrared and Visible
Image Fusion in Complex Scenes
- URL: http://arxiv.org/abs/2402.02096v1
- Date: Sat, 3 Feb 2024 09:27:33 GMT
- Title: Decomposition-based and Interference Perception for Infrared and Visible Image Fusion in Complex Scenes
- Authors: Xilai Li, Xiaosong Li, Haishu Tan
- Abstract summary: We propose a decomposition-based and interference-perception image fusion method.
We classify the pixels of the visible image according to the degree of scattering of light transmission, and on that basis separate the detail and energy information of the image.
This refined decomposition helps the proposed model identify more interfering pixels in complex scenes.
- Score: 4.919706769234434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Infrared and visible image fusion has emerged as a prominent
research area in computer vision. However, little attention has been paid to
fusion in complex scenes, so existing techniques produce sub-optimal results
under real-world interference. To fill this gap, we propose a
decomposition-based and interference-perception image fusion method.
Specifically, we classify the pixels of the visible image according to the
degree of scattering of light transmission, and on that basis separate the
detail and energy information of the image. This refined decomposition helps
the proposed model identify more interfering pixels in complex scenes. To
strike a balance between denoising and detail preservation, we propose an
adaptive denoising scheme for fusing the detail components. Meanwhile, we
propose a new weighted fusion rule that considers the distribution of image
energy information from multiple directions. Extensive experiments on
complex-scene fusion, covering adverse weather, noise, blur, overexposure, and
fire, as well as downstream tasks including semantic segmentation, object
detection, salient object detection, and depth estimation, consistently
indicate the effectiveness and superiority of the proposed method compared
with recent representative methods.
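The decomposition the abstract describes can be pictured with a generic two-scale (energy/detail) fusion pipeline. The sketch below is illustrative only: a Gaussian low-pass stands in for the energy layer and simple average/max-absolute rules stand in for the paper's multi-directional weighted rule and adaptive denoising, which are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fusion(ir, vis, sigma=5.0):
    """Illustrative two-scale infrared/visible fusion, NOT the paper's
    exact algorithm: each source is split into an energy (low-frequency)
    layer and a detail (residual) layer, then the layers are fused with
    simple placeholder rules."""
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)

    # Decompose each source into energy and detail layers.
    base_ir = gaussian_filter(ir, sigma)
    base_vis = gaussian_filter(vis, sigma)
    det_ir, det_vis = ir - base_ir, vis - base_vis

    # Energy layers: plain average (placeholder for the paper's
    # multi-directional weighted fusion rule).
    fused_base = 0.5 * (base_ir + base_vis)

    # Detail layers: keep the stronger response at each pixel
    # (placeholder for the paper's adaptive denoising scheme).
    fused_detail = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)

    return fused_base + fused_detail
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the decomposition is lossless.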
Related papers
- Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and Visible Images [49.75771095302775]
We propose an Adaptive Multi-scale Fusion network (AMFusion) for infrared and visible images.
First, we separately fuse spatial and semantic features from the infrared and visible images, where the former are used to adjust the light distribution.
Second, we utilize detection features extracted by a pre-trained backbone to guide the fusion of semantic features.
Third, we propose a new illumination loss that constrains the fused image to normal light intensity.
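An illumination loss of this kind can be sketched as a penalty on the fused image's mean brightness. The form below is an assumption for illustration (an L1 deviation from a hypothetical target level); AMFusion's actual loss is not specified in this summary.

```python
import numpy as np

def illumination_loss(fused, target_mean=0.5):
    """Hypothetical illumination loss: penalize the deviation of the
    fused image's mean intensity from a 'normal' target level.
    The target_mean value and L1 form are assumptions, not AMFusion's
    published formulation."""
    return abs(float(fused.mean()) - target_mean)
```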
arXiv Detail & Related papers (2024-03-02T03:52:07Z)
- A Multi-scale Information Integration Framework for Infrared and Visible Image Fusion [46.545365049713105]
Infrared and visible image fusion aims at generating a fused image containing intensity and detail information of source images.
Existing methods mostly adopt a simple weight in the loss function to decide the information retention of each modality.
We propose a multi-scale dual attention (MDA) framework for infrared and visible image fusion.
arXiv Detail & Related papers (2023-12-07T14:40:05Z)
- Deep-learning-based decomposition of overlapping-sparse images: application at the vertex of neutrino interactions [2.5521723486759407]
This paper presents a solution that leverages the power of deep learning to accurately extract individual objects within multi-dimensional overlapping-sparse images.
It has a direct application in high-energy physics: the decomposition of overlaid elementary particles obtained from imaging detectors.
arXiv Detail & Related papers (2023-10-30T16:12:25Z)
- PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with a gain of 15.3% mIoU over advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z)
- Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder development: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z)
- Breaking Modality Disparity: Harmonized Representation for Infrared and Visible Image Registration [66.33746403815283]
We propose a scene-adaptive infrared and visible image registration method.
We employ homography to simulate the deformation between different planes.
We present the first misaligned infrared and visible image dataset with available ground truth.
arXiv Detail & Related papers (2023-04-12T06:49:56Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Visible and Near Infrared Image Fusion Based on Texture Information [4.718295968108302]
A novel visible and near-infrared fusion method based on texture information is proposed to enhance unstructured environmental images.
It addresses the problems of artifacts, information loss, and noise in traditional visible and near-infrared image fusion methods.
The experimental results demonstrate that the proposed algorithm can preserve the spectral characteristics and the unique information of visible and near-infrared images.
arXiv Detail & Related papers (2022-07-22T09:02:17Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space, either by iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- When Image Decomposition Meets Deep Learning: A Novel Infrared and Visible Image Fusion Method [27.507158159317417]
Infrared and visible image fusion is a hot topic in image processing and image enhancement.
We propose a novel dual-stream auto-encoder based fusion network.
arXiv Detail & Related papers (2020-09-02T19:32:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.