Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and
Visible Images
- URL: http://arxiv.org/abs/2403.01083v1
- Date: Sat, 2 Mar 2024 03:52:07 GMT
- Title: Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and
Visible Images
- Authors: Shufan Pei, Junhong Lin, Wenxi Liu, Tiesong Zhao and Chia-Wen Lin
- Abstract summary: We propose an Adaptive Multi-scale Fusion network (AMFusion) for infrared and visible images.
First, we separately fuse spatial and semantic features from infrared and visible images, where the former are used to adjust the light distribution.
Second, we utilize detection features extracted by a pre-trained backbone to guide the fusion of semantic features.
Third, we propose a new illumination loss that constrains the fused image to a normal light intensity.
- Score: 49.75771095302775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In addition to low light, night images suffer degradation from light effects (e.g., glare and floodlight). However, existing nighttime visibility enhancement methods generally focus on low-light regions and neglect, or even amplify, the light effects. To address this issue, we propose an Adaptive Multi-scale Fusion network (AMFusion) for infrared and visible images, which designs fusion rules according to different illumination regions. First, we separately fuse spatial and semantic features from infrared and visible images, where the former are used to adjust the light distribution and the latter are used to improve detection accuracy. In this way, we obtain an image free of low light and light effects, which improves the performance of nighttime object detection. Second, we utilize detection features extracted by a pre-trained backbone to guide the fusion of semantic features; to this end, we design a Detection-guided Semantic Fusion Module (DSFM) to bridge the domain gap between detection and semantic features. Third, we propose a new illumination loss that constrains the fused image to a normal light intensity. Experimental results demonstrate the superiority of AMFusion in both visual quality and detection accuracy. The source code will be released after the peer review process.
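The abstract names three concrete components (spatial/semantic feature fusion, the DSFM, and an illumination loss) but does not give their formulations. The PyTorch sketch below shows one plausible reading of the illumination loss and of a detection-guided fusion step; the mid-gray target, patch size, cross-attention design, and all module and function names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an AMFusion-style illumination loss and a
# DSFM-style detection-guided semantic fusion. Layer sizes, the
# mid-gray target, and the attention design are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def illumination_loss(fused, target_intensity=0.5, patch=16):
    """Penalize local mean intensity that deviates from a 'normal' level.

    fused: (B, 3, H, W) fusion result in [0, 1]. One plausible reading of
    'constrain the fused image to a normal light intensity': per-patch
    average intensity should sit near mid-gray, suppressing both
    low-light regions and glare.
    """
    luma = fused.mean(dim=1, keepdim=True)          # (B, 1, H, W)
    local_mean = F.avg_pool2d(luma, patch)          # per-patch intensity
    target = torch.full_like(local_mean, target_intensity)
    return F.l1_loss(local_mean, target)

class DSFM(nn.Module):
    """Detection-guided Semantic Fusion Module (sketch only).

    Detection features from a frozen pre-trained backbone act as
    queries over fused IR/visible semantic features, so that the
    detector's representation steers the semantic fusion.
    """
    def __init__(self, sem_dim, det_dim, heads=4):
        super().__init__()
        self.det_proj = nn.Conv2d(det_dim, sem_dim, 1)   # align channels
        self.attn = nn.MultiheadAttention(sem_dim, heads, batch_first=True)
        self.merge = nn.Conv2d(2 * sem_dim, sem_dim, 1)

    def forward(self, sem_ir, sem_vis, det_feat):
        # Assumes det_feat was resized to the semantic features' resolution.
        sem = self.merge(torch.cat([sem_ir, sem_vis], dim=1))   # (B, C, H, W)
        b, c, h, w = sem.shape
        q = self.det_proj(det_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        kv = sem.flatten(2).transpose(1, 2)                     # (B, HW, C)
        fused, _ = self.attn(q, kv, kv)          # detection guides fusion
        return fused.transpose(1, 2).reshape(b, c, h, w)
```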
Related papers
- Decomposition-based and Interference Perception for Infrared and Visible Image Fusion in Complex Scenes [4.919706769234434]
We propose a decomposition-based and interference-perception image fusion method.
We classify the pixels of the visible image by the degree of light-transmission scattering, and on that basis separate the image's detail and energy information.
This refined decomposition helps the proposed model identify more interfering pixels in complex scenes.
arXiv Detail & Related papers (2024-02-03T09:27:33Z)
- IAIFNet: An Illumination-Aware Infrared and Visible Image Fusion Network [13.11361803763253]
We propose an Illumination-Aware Infrared and Visible Image Fusion Network, named IAIFNet.
In our framework, an illumination enhancement network first estimates the incident illumination maps of the input images (a sketch of this stage follows this entry).
With the help of the proposed adaptive differential fusion module (ADFM) and salient target aware module (STAM), an image fusion network effectively integrates the salient features of the illumination-enhanced infrared and visible images into a fused image of high visual quality.
arXiv Detail & Related papers (2023-09-26T15:12:29Z)
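The summary mentions an illumination enhancement network that estimates incident illumination maps, but not its design. Below is a minimal Retinex-style sketch of that first stage; the max-over-channels initialization, smoothing kernel size, and gamma are assumptions for illustration, and ADFM/STAM are not reproduced here.

```python
# Illustrative Retinex-style illumination estimation and enhancement,
# standing in for IAIFNet's first stage; kernel size and gamma are
# assumed values, not taken from the paper.
import torch
import torch.nn.functional as F

def estimate_illumination(img, kernel=31):
    """Coarse incident-illumination map: smoothed max over RGB channels."""
    init = img.max(dim=1, keepdim=True).values               # (B, 1, H, W)
    return F.avg_pool2d(init, kernel, stride=1, padding=kernel // 2)

def enhance(img, gamma=0.6, eps=1e-4):
    """Retinex-style brightening: divide by a softened illumination map."""
    illum = estimate_illumination(img).clamp(min=eps)
    return (img / illum.pow(gamma)).clamp(0.0, 1.0)
```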
- An Interactively Reinforced Paradigm for Joint Infrared-Visible Image Fusion and Saliency Object Detection [59.02821429555375]
This research focuses on discovering and localizing hidden objects in the wild, in service of unmanned systems.
Empirical analysis shows that infrared and visible image fusion (IVIF) makes hard-to-find objects apparent.
Multimodal salient object detection (SOD) then accurately delineates the precise spatial location of objects within the picture.
arXiv Detail & Related papers (2023-05-17T06:48:35Z)
- Factorized Inverse Path Tracing for Efficient and Accurate Material-Lighting Estimation [97.0195314255101]
Inverse path tracing is expensive to compute, and ambiguities exist between reflection and emission.
Our Factorized Inverse Path Tracing (FIPT) addresses these challenges by using a factored light transport formulation.
Our algorithm enables accurate material and lighting optimization faster than previous work, and is more effective at resolving ambiguities.
arXiv Detail & Related papers (2023-04-12T07:46:05Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the fusion of infrared and visible images, which differ in appearance, for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space via either iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection (a generic form is sketched after this entry), then unrolls it into a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
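The bilevel formulation itself is not stated in the summary; a generic form of such a joint fusion-detection objective, with all notation chosen here for illustration (F_ω a fusion network, D_θ a detector), might read:

```latex
% Generic bilevel fusion-detection objective; the notation is assumed
% here, not taken from the paper.
\min_{\omega}\; \mathcal{L}_{\mathrm{det}}\big(D_{\theta}(F_{\omega}(x_{\mathrm{ir}}, x_{\mathrm{vis}}))\big)
\quad \text{s.t.} \quad
\omega \in \arg\min_{\omega'}\; \mathcal{L}_{\mathrm{fuse}}\big(F_{\omega'}(x_{\mathrm{ir}}, x_{\mathrm{vis}});\, x_{\mathrm{ir}}, x_{\mathrm{vis}}\big)
```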
- Fusion Detection via Distance-Decay IoU and weighted Dempster-Shafer Evidence Theory [0.0]
A fast multi-source fusion detection framework is proposed in this paper.
A novel distance-decay intersection over union (IoU) is employed to encode the shape properties of the targets.
Weighted Dempster-Shafer evidence theory is utilized to combine the optical and synthetic aperture radar (SAR) detections; a sketch of both ingredients follows this entry.
arXiv Detail & Related papers (2021-12-06T13:46:39Z)
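Both named ingredients of the last entry are concrete enough to sketch. Below, the exponential decay over normalized center distance and the two-class frame of discernment are assumptions for illustration; the paper's exact decay function and weighting scheme may differ.

```python
# Sketch of a distance-decay IoU and a two-source Dempster-Shafer
# combination; the decay form and the frame of discernment are
# illustrative assumptions, not the paper's definitions.
import math

def distance_decay_iou(box_a, box_b, decay=1.0):
    """IoU attenuated by normalized center distance (boxes: x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    iou = inter / union if union > 0 else 0.0
    # Attenuate by center distance, normalized by the enclosing-box diagonal.
    cxa, cya = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cxb, cyb = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    ex1, ey1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    ex2, ey2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    diag = math.hypot(ex2 - ex1, ey2 - ey1)
    dist = math.hypot(cxa - cxb, cya - cyb) / max(diag, 1e-9)
    return iou * math.exp(-decay * dist)

def ds_combine(m1, m2):
    """Dempster's rule over the frame {'target', 'clutter'}, with 'theta'
    holding the mass assigned to ignorance (the whole frame)."""
    hyps = ("target", "clutter", "theta")
    # Conflict: mass assigned to incompatible singleton pairs.
    conflict = sum(m1[a] * m2[b] for a in hyps for b in hyps
                   if "theta" not in (a, b) and a != b)
    k = 1.0 - conflict
    fused = {}
    for h in ("target", "clutter"):
        fused[h] = (m1[h] * m2[h] + m1[h] * m2["theta"]
                    + m1["theta"] * m2[h]) / k
    fused["theta"] = m1["theta"] * m2["theta"] / k
    return fused
```

One common way to realize the "weighted" variant is to discount each source's masses before combining, shifting mass toward "theta" in proportion to that source's estimated reliability.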
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.