Assessing the applicability of Deep Learning-based visible-infrared fusion methods for fire imagery
- URL: http://arxiv.org/abs/2101.11745v1
- Date: Wed, 27 Jan 2021 23:53:36 GMT
- Title: Assessing the applicability of Deep Learning-based visible-infrared fusion methods for fire imagery
- Authors: J. F. Ciprián-Sánchez and G. Ochoa-Ruiz and M. Gonzalez-Mendoza and L. Rossi
- Abstract summary: Early wildfire detection is of paramount importance to avoid as much damage as possible to the environment, properties, and lives.
Deep Learning models that can leverage both visible and infrared information have the potential to display state-of-the-art performance.
Most DL-based image fusion methods have not been evaluated in the domain of fire imagery.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Early wildfire detection is of paramount importance to avoid as much damage
as possible to the environment, properties, and lives. Deep Learning (DL)
models that can leverage both visible and infrared information have the
potential to display state-of-the-art performance, with lower false-positive
rates than existing techniques. However, most DL-based image fusion methods
have not been evaluated in the domain of fire imagery. Additionally, to the
best of our knowledge, no publicly available dataset contains visible-infrared
fused fire images. There is a growing interest in DL-based image fusion
techniques due to their reduced complexity; for this reason, we select three
state-of-the-art DL-based image fusion techniques and evaluate them on the
specific task of fire image fusion. We compare the performance of these methods
on selected metrics. Finally, we present an extension to one of these methods,
which we call FIRe-GAN, that improves the generation of both artificial
infrared images and fused images on the selected metrics.
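As an aside on how such a metric-based comparison can be set up, the sketch below scores a fused image with two common fusion metrics: Shannon entropy and the Pearson correlation with each source image. These particular metrics, and all the array names, are illustrative assumptions; the paper's exact metric selection is given in the full text.

```python
import numpy as np

def entropy(img: np.ndarray) -> float:
    """Shannon entropy of an 8-bit-range grayscale image (higher = more information)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def correlation(fused: np.ndarray, source: np.ndarray) -> float:
    """Pearson correlation between the fused image and one source image."""
    return float(np.corrcoef(fused.ravel(), source.ravel())[0, 1])

# Hypothetical usage: score one fused result against its visible/infrared sources.
rng = np.random.default_rng(0)
visible = rng.integers(0, 256, (256, 256)).astype(np.float64)
infrared = rng.integers(0, 256, (256, 256)).astype(np.float64)
fused = 0.5 * visible + 0.5 * infrared  # stand-in for a DL fusion output

print(f"entropy:       {entropy(fused):.3f}")
print(f"corr-visible:  {correlation(fused, visible):.3f}")
print(f"corr-infrared: {correlation(fused, infrared):.3f}")
```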
Related papers
- Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and Visible Images [49.75771095302775]
We propose an Adaptive Multi-scale Fusion network (AMFusion) for infrared and visible images.
First, we separately fuse spatial and semantic features from the infrared and visible images, where the former are used to adjust the light distribution.
Second, we utilize detection features extracted by a pre-trained backbone to guide the fusion of semantic features.
Third, we propose a new illumination loss to constrain the fused image to a normal light intensity.
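The abstract does not give the form of that illumination loss; the following is a minimal sketch of one plausible variant that penalizes deviation of the fused image's mean brightness from a fixed "normal light" target. The target value of 0.5 and the squared-error form are assumptions, not the paper's definition.

```python
import torch

def illumination_loss(fused: torch.Tensor, target_intensity: float = 0.5) -> torch.Tensor:
    """Penalize fused images whose average brightness strays from a normal-light level.

    fused: (B, C, H, W) tensor in [0, 1]. The 0.5 target is an assumed stand-in
    for "normal light intensity"; the paper's actual loss may differ.
    """
    mean_intensity = fused.mean(dim=(1, 2, 3))  # per-image average brightness
    return ((mean_intensity - target_intensity) ** 2).mean()

# Hypothetical usage inside a training step:
fused = torch.rand(4, 3, 128, 128, requires_grad=True)
loss = illumination_loss(fused)
loss.backward()
```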
arXiv Detail & Related papers (2024-03-02T03:52:07Z)
- IAIFNet: An Illumination-Aware Infrared and Visible Image Fusion Network [13.11361803763253]
We propose an Illumination-Aware Infrared and Visible Image Fusion Network, named IAIFNet.
In our framework, an illumination enhancement network first estimates the incident illumination maps of the input images.
With the help of the proposed adaptive differential fusion module (ADFM) and salient target aware module (STAM), an image fusion network effectively integrates the salient features of the illumination-enhanced infrared and visible images into a fused image of high visual quality.
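As a rough illustration of the first stage only, the sketch below pairs a tiny stand-in illumination-estimation network with a Retinex-style enhancement step. The layer shapes and the division-based enhancement are assumptions; they are not the paper's ADFM/STAM design.

```python
import torch
import torch.nn as nn

class IlluminationEstimator(nn.Module):
    """Tiny stand-in for an illumination-estimation network: predicts a
    single-channel incident-illumination map for each input image."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # map in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Retinex-style enhancement: divide out the estimated illumination.
estimator = IlluminationEstimator()
visible = torch.rand(1, 1, 64, 64)
illum = estimator(visible)
enhanced = visible / (illum + 1e-6)  # illumination-enhanced input for fusion
```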
arXiv Detail & Related papers (2023-09-26T15:12:29Z)
- SSPFusion: A Semantic Structure-Preserving Approach for Infrared and Visible Image Fusion [30.55433673796615]
Most existing learning-based infrared and visible image fusion (IVIF) methods exhibit massive redundant information in their fused images.
We propose a semantic structure-preserving approach for IVIF, namely SSPFusion.
Our method is able to generate high-quality fusion images from pairs of infrared and visible images, which can boost the performance of downstream computer-vision tasks.
arXiv Detail & Related papers (2023-09-26T08:13:32Z)
- PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with gains of 15.3% mIoU compared with advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder development: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in that common space via either iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, which is then unrolled into a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
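A single-level surrogate of that joint problem can be sketched as follows: a fusion network is trained against both a reconstruction term and the downstream detector's loss on the fused image. The placeholder networks, MSE losses, and the 0.5 trade-off weight are all assumptions; TarDAL's actual bilevel, adversarial formulation is more involved.

```python
import torch
import torch.nn as nn

# Stand-in networks; the real method uses an adversarial fusion network
# and a full object detector rather than single conv layers.
fuser = nn.Conv2d(2, 1, 3, padding=1)          # stacked IR+visible -> fused image
detector_head = nn.Conv2d(1, 1, 3, padding=1)  # placeholder for a detection network
opt = torch.optim.Adam(
    list(fuser.parameters()) + list(detector_head.parameters()), lr=1e-4
)

ir = torch.rand(2, 1, 64, 64)
vis = torch.rand(2, 1, 64, 64)
target_map = torch.rand(2, 1, 64, 64)          # placeholder detection target

fused = fuser(torch.cat([ir, vis], dim=1))
fusion_loss = ((fused - ir) ** 2).mean() + ((fused - vis) ** 2).mean()
detection_loss = ((detector_head(fused) - target_map) ** 2).mean()

# Joint objective: fusion quality plus downstream detection performance.
loss = fusion_loss + 0.5 * detection_loss
opt.zero_grad(); loss.backward(); opt.step()
```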
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- A Deep Decomposition Network for Image Processing: A Case Study for Visible and Infrared Image Fusion [38.17268441062239]
We propose a new image decomposition method based on a convolutional neural network.
We take an infrared image and a visible-light image as input and decompose each into three high-frequency feature images and one low-frequency feature image.
The two sets of feature images are fused using a specific fusion strategy to obtain the fused feature images.
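A minimal sketch of this decompose-then-fuse pattern, with the channel counts taken from the description above; the convolutional layers and the max/average fusion rules are placeholder assumptions rather than the paper's learned strategy.

```python
import torch
import torch.nn as nn

class Decomposer(nn.Module):
    """Toy decomposition network: maps one image to three high-frequency
    feature maps and one low-frequency feature map (the 3+1 split follows the
    paper's description; the layers themselves are placeholders)."""
    def __init__(self):
        super().__init__()
        self.high = nn.Conv2d(1, 3, 3, padding=1)  # three high-frequency channels
        self.low = nn.Conv2d(1, 1, 5, padding=2)   # one low-frequency channel

    def forward(self, x: torch.Tensor):
        return self.high(x), self.low(x)

decomposer = Decomposer()
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
ir_hi, ir_lo = decomposer(ir)
vis_hi, vis_lo = decomposer(vis)

# Assumed fusion rules: keep the stronger high-frequency response (detail),
# average the low-frequency responses (base brightness).
fused_hi = torch.maximum(ir_hi, vis_hi)
fused_lo = 0.5 * (ir_lo + vis_lo)
```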
arXiv Detail & Related papers (2021-02-21T06:34:33Z)
- A Dual-branch Network for Infrared and Visible Image Fusion [20.15854042473049]
We propose a new method based on dense blocks and GANs.
We directly feed the input visible-light image into each layer of the entire network.
Our experiments show that the fused images obtained by our approach achieve good scores on multiple evaluation metrics.
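A minimal sketch of the input re-injection idea, in which the visible image is concatenated to the features at every layer. The layer count, widths, and single-branch simplification are assumptions; the paper's network uses two branches, dense blocks, and a GAN objective.

```python
import torch
import torch.nn as nn

class DenseWithInputInjection(nn.Module):
    """Dense-style stack where the original visible image is concatenated to
    the features at every layer, echoing the paper's direct input insertion.
    Layer count and widths are illustrative."""
    def __init__(self, layers: int = 3, width: int = 16):
        super().__init__()
        self.convs = nn.ModuleList()
        channels = 1                      # start from the visible image alone
        for _ in range(layers):
            # +1 accounts for the re-injected visible image at each layer
            self.convs.append(nn.Conv2d(channels + 1, width, 3, padding=1))
            channels = width

    def forward(self, visible: torch.Tensor) -> torch.Tensor:
        feat = visible
        for conv in self.convs:
            feat = torch.relu(conv(torch.cat([feat, visible], dim=1)))
        return feat

features = DenseWithInputInjection()(torch.rand(1, 1, 64, 64))
```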
arXiv Detail & Related papers (2021-01-24T04:18:32Z)
- Bayesian Fusion for Infrared and Visible Images [26.64101343489016]
In this paper, a novel Bayesian fusion model is established for infrared and visible images.
We aim to make the fused image satisfy the human visual system.
Compared with previous methods, the novel model can generate better fused images with highlighted targets and rich texture details.
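As a textbook illustration of the Bayesian view, far simpler than the paper's model: if each source is treated as a noisy Gaussian observation of a latent fused image, the posterior mean is a precision-weighted average of the sources. The fixed noise variances below are assumptions; the paper's model infers its quantities rather than using constants.

```python
import numpy as np

def bayesian_fuse(ir: np.ndarray, vis: np.ndarray,
                  var_ir: float = 0.02, var_vis: float = 0.01) -> np.ndarray:
    """Treat each source as a noisy Gaussian observation of a latent image;
    the posterior mean is the precision-weighted average of the sources.
    The noise variances are assumed constants, not learned as in the paper."""
    w_ir, w_vis = 1.0 / var_ir, 1.0 / var_vis
    return (w_ir * ir + w_vis * vis) / (w_ir + w_vis)

fused = bayesian_fuse(np.random.rand(64, 64), np.random.rand(64, 64))
```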
arXiv Detail & Related papers (2020-05-12T14:57:19Z)