Fire and Smoke Detection with Burning Intensity Representation
- URL: http://arxiv.org/abs/2410.16642v1
- Date: Tue, 22 Oct 2024 02:41:37 GMT
- Title: Fire and Smoke Detection with Burning Intensity Representation
- Authors: Xiaoyi Han, Yanfei Wu, Nan Pu, Zunlei Feng, Qifei Zhang, Yijun Bei, Lechao Cheng
- Abstract summary: An effective Fire and Smoke Detection (FSD) and analysis system is of paramount importance due to the destructive potential of fire disasters.
Many existing FSD methods directly employ generic object detection techniques without considering the transparency of fire and smoke.
A new Attentive Fire and Smoke Detection Model (a-FSDM) is proposed to address this issue.
- Score: 14.738038604117364
- License:
- Abstract: An effective Fire and Smoke Detection (FSD) and analysis system is of paramount importance due to the destructive potential of fire disasters. However, many existing FSD methods directly employ generic object detection techniques without considering the transparency of fire and smoke, which leads to imprecise localization and reduces detection performance. To address this issue, a new Attentive Fire and Smoke Detection Model (a-FSDM) is proposed. This model not only retains the robust feature extraction and fusion capabilities of conventional detection algorithms but also redesigns the detection head specifically for transparent targets in FSD, termed the Attentive Transparency Detection Head (ATDH). In addition, Burning Intensity (BI) is introduced as a pivotal feature for fire-related downstream risk assessments in traditional FSD methodologies. Extensive experiments on multiple FSD datasets showcase the effectiveness and versatility of the proposed FSD model. The project is available at https://xiaoyihan6.github.io/FSD/.
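The ATDH described above is, at its core, a detection head augmented with an attention mechanism tuned to low-contrast, semi-transparent targets. The PyTorch sketch below illustrates one plausible shape such a head could take; the class name, channel sizes, and the residual spatial-attention design are illustrative assumptions, not the authors' implementation (see the project page for the official code).

```python
# Minimal sketch (not the authors' code): a detection head that applies a
# learned spatial attention map before box/class prediction, assuming a
# YOLO-style single-scale feature input. Channel sizes are illustrative.
import torch
import torch.nn as nn

class AttentiveTransparencyHead(nn.Module):
    """Hypothetical stand-in for the paper's ATDH: re-weights features with a
    spatial attention map so low-contrast, semi-transparent regions (fire,
    smoke) contribute more to the final predictions."""

    def __init__(self, in_channels: int = 256, num_classes: int = 2, num_anchors: int = 3):
        super().__init__()
        # Spatial attention: squeeze channels, predict a per-pixel gate in [0, 1].
        self.attention = nn.Sequential(
            nn.Conv2d(in_channels, in_channels // 4, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Standard detection outputs: 4 box coords + 1 objectness + classes per anchor.
        self.pred = nn.Conv2d(in_channels, num_anchors * (5 + num_classes), kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        gate = self.attention(feats)        # (B, 1, H, W)
        attended = feats * gate + feats     # residual re-weighting keeps the original signal
        return self.pred(attended)          # (B, A*(5+C), H, W)

if __name__ == "__main__":
    head = AttentiveTransparencyHead(in_channels=256, num_classes=2)
    out = head(torch.randn(1, 256, 40, 40))
    print(out.shape)  # torch.Size([1, 21, 40, 40])
```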
Related papers
- CCi-YOLOv8n: Enhanced Fire Detection with CARAFE and Context-Guided Modules [0.0]
Fire incidents in urban and forested areas pose serious threats.
We present CCi-YOLOv8n, an enhanced YOLOv8 model with targeted improvements for detecting small fires and smoke.
arXiv Detail & Related papers (2024-11-17T09:31:04Z)
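CARAFE, mentioned in the CCi-YOLOv8n entry above, is a published content-aware upsampling operator (Wang et al., ICCV 2019). The sketch below shows a simplified CARAFE-style upsampler; the hyper-parameters and the nearest-neighbor patch expansion are typical defaults, not necessarily the settings used in CCi-YOLOv8n.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CARAFEUpsample(nn.Module):
    """Sketch of a CARAFE-style content-aware upsampler: predict a reassembly
    kernel for every upsampled location, then reassemble local neighborhoods
    of the input feature map with those kernels."""

    def __init__(self, channels: int, scale: int = 2, k_up: int = 5, compressed: int = 64):
        super().__init__()
        self.scale, self.k_up = scale, k_up
        self.compress = nn.Conv2d(channels, compressed, kernel_size=1)
        # One k_up*k_up kernel per sub-pixel position of the upsampled grid.
        self.kernel_pred = nn.Conv2d(compressed, scale * scale * k_up * k_up,
                                     kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # 1) Predict and normalize reassembly kernels at the target resolution.
        kernels = F.pixel_shuffle(self.kernel_pred(self.compress(x)), self.scale)
        kernels = F.softmax(kernels, dim=1)                        # (B, k*k, sH, sW)
        # 2) Gather k_up x k_up neighborhoods and lift them to the target resolution.
        patches = F.unfold(x, self.k_up, padding=self.k_up // 2)   # (B, C*k*k, H*W)
        patches = F.interpolate(
            patches.view(b, c * self.k_up * self.k_up, h, w),
            scale_factor=self.scale, mode="nearest",
        ).view(b, c, self.k_up * self.k_up, h * self.scale, w * self.scale)
        # 3) Reassemble: weighted sum over each neighborhood.
        return (patches * kernels.unsqueeze(1)).sum(dim=2)

if __name__ == "__main__":
    up = CARAFEUpsample(channels=128)
    print(up(torch.randn(1, 128, 20, 20)).shape)  # torch.Size([1, 128, 40, 40])
```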
- Benchmarking Multi-Scene Fire and Smoke Detection [15.091410770385812]
Current irregularities in existing public Fire and Smoke Detection (FSD) datasets have become a bottleneck in the advancement of FSD technology.
We aim to establish a standardized, realistic, unified, and efficient FSD research platform that mirrors real-life scenes closely.
arXiv Detail & Related papers (2024-10-22T02:19:23Z)
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
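The forgery-style mixture above is described only at a high level; one common way to realize "style mixing" is to blend per-sample feature statistics across a batch, as in MixStyle. The sketch below takes that route purely as a hedged illustration; the paper's actual formulation may differ.

```python
import torch
import torch.nn as nn

class StyleMixture(nn.Module):
    """Hedged sketch of a forgery-style mixture: mix per-sample channel
    statistics (mean/std) across the batch so the detector sees a wider
    range of 'forgery styles' during training."""

    def __init__(self, alpha: float = 0.1, eps: float = 1e-6):
        super().__init__()
        self.beta = torch.distributions.Beta(alpha, alpha)
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        if not self.training:
            return x
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.var(dim=(2, 3), keepdim=True).add(self.eps).sqrt()
        normed = (x - mu) / sigma
        # Mix each sample's statistics with those of a random other sample.
        perm = torch.randperm(x.size(0), device=x.device)
        lam = self.beta.sample((x.size(0), 1, 1, 1)).to(x.device)
        mu_mix = lam * mu + (1 - lam) * mu[perm]
        sigma_mix = lam * sigma + (1 - lam) * sigma[perm]
        return normed * sigma_mix + mu_mix
```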
- Coarse-to-Fine Proposal Refinement Framework for Audio Temporal Forgery Detection and Localization [60.899082019130766]
We introduce a frame-level detection network (FDN) and a proposal refinement network (PRN) for audio temporal forgery detection and localization.
FDN aims to mine informative inconsistency cues between real and fake frames to obtain discriminative features that are beneficial for roughly indicating forgery regions.
PRN is responsible for predicting confidence scores and regression offsets to refine the coarse-grained proposals derived from the FDN.
arXiv Detail & Related papers (2024-07-23T15:07:52Z)
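The FDN/PRN pipeline above follows a familiar coarse-to-fine pattern: frame-level scores yield coarse temporal proposals, and a refinement head predicts a confidence score plus start/end offsets for each proposal. The sketch below shows a minimal refinement head of that kind; the pooling scheme and output parameterization are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class ProposalRefiner(nn.Module):
    """Illustrative sketch (not the paper's code) of a PRN-style head: given
    frame features pooled inside a coarse temporal proposal, predict a
    confidence score plus start/end offsets that refine the proposal."""

    def __init__(self, feat_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 3),  # [confidence_logit, d_start, d_end]
        )

    def forward(self, frame_feats: torch.Tensor, proposals: torch.Tensor):
        """frame_feats: (T, D) per-frame features; proposals: (N, 2) start/end indices."""
        refined, scores = [], []
        for start, end in proposals.tolist():
            pooled = frame_feats[int(start):int(end) + 1].mean(dim=0)  # average-pool the segment
            conf_logit, d_start, d_end = self.mlp(pooled)
            scores.append(torch.sigmoid(conf_logit))
            refined.append(torch.stack([start + d_start, end + d_end]))
        return torch.stack(refined), torch.stack(scores)

if __name__ == "__main__":
    prn = ProposalRefiner()
    feats = torch.randn(100, 256)                        # 100 audio frames
    coarse = torch.tensor([[10.0, 30.0], [55.0, 80.0]])  # coarse forgery proposals
    boxes, conf = prn(feats, coarse)
    print(boxes.shape, conf.shape)  # torch.Size([2, 2]) torch.Size([2])
```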
- Diffusion-Based Particle-DETR for BEV Perception [94.88305708174796]
Bird-Eye-View (BEV) is one of the most widely-used scene representations for visual perception in Autonomous Vehicles (AVs).
Recent diffusion-based methods offer a promising approach to uncertainty modeling for visual perception but fail to effectively detect small objects in the large coverage of the BEV.
Here, we address this problem by combining the diffusion paradigm with current state-of-the-art 3D object detectors in BEV.
arXiv Detail & Related papers (2023-12-18T09:52:14Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
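DiAD's feature-space component scores anomalies by comparing pre-trained features of the input with those of its reconstruction. The sketch below illustrates that comparison step only, using a torchvision ResNet-18 and a single feature level as simplifying assumptions; the actual framework also involves the pixel-space autoencoder and the SG denoising network.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Hedged sketch of a feature-space anomaly score: compare pre-trained features
# of the input image and its reconstruction. Backbone choice and the single
# feature level are simplifications, not the paper's configuration.
backbone = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:6]).eval()  # up to layer2

@torch.no_grad()
def anomaly_map(image: torch.Tensor, reconstruction: torch.Tensor) -> torch.Tensor:
    """image, reconstruction: (B, 3, H, W) normalized tensors -> (B, H, W) anomaly map."""
    f_img = feature_extractor(image)
    f_rec = feature_extractor(reconstruction)
    # 1 - cosine similarity per location: high where the reconstruction diverges.
    dist = 1.0 - F.cosine_similarity(f_img, f_rec, dim=1)  # (B, h, w)
    return F.interpolate(dist.unsqueeze(1), size=image.shape[-2:],
                         mode="bilinear", align_corners=False).squeeze(1)

if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)
    x_hat = x + 0.1 * torch.randn_like(x)  # stand-in for a diffusion reconstruction
    print(anomaly_map(x, x_hat).shape)     # torch.Size([1, 256, 256])
```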
- Wildfire Smoke Detection with Cross Contrast Patch Embedding [5.965059322800441]
Transformer-based deep networks have increasingly shown significant advantages over CNNs.
Low-level information such as color, transparency and texture is very important for smoke recognition.
The fuzzy boundary of smoke makes it difficult to assign positive and negative labels to instances.
arXiv Detail & Related papers (2023-11-16T06:53:03Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
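The decomposition idea above can be pictured as splitting an embedding into a part that drives the real/fake decision and a part that is penalized for correlating with it. The sketch below is a loose illustration under that reading and is not the authors' DID module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InfoDecomposition(nn.Module):
    """Loose illustration (not the authors' DID module): decompose a face
    embedding into a 'deepfake-related' part used for real/fake prediction
    and an 'irrelevant' part pushed to be uncorrelated with it."""

    def __init__(self, feat_dim: int = 512, part_dim: int = 256):
        super().__init__()
        self.related = nn.Linear(feat_dim, part_dim)     # deepfake-related branch
        self.irrelevant = nn.Linear(feat_dim, part_dim)  # everything else (identity, pose, ...)
        self.classifier = nn.Linear(part_dim, 2)         # real / fake

    def forward(self, feats: torch.Tensor):
        z_rel = self.related(feats)
        z_irr = self.irrelevant(feats)
        logits = self.classifier(z_rel)  # only the related part drives the decision
        # Simple decorrelation penalty between the two parts.
        decor_loss = F.cosine_similarity(z_rel, z_irr, dim=1).abs().mean()
        return logits, decor_loss

if __name__ == "__main__":
    m = InfoDecomposition()
    logits, decor = m(torch.randn(8, 512))
    print(logits.shape, float(decor))  # torch.Size([8, 2]) ...
```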
- FSDNet-An efficient fire detection network for complex scenarios based on YOLOv3 and DenseNet [8.695064779659031]
This paper proposes a fire detection network called FSDNet (Fire Smoke Detection Network), which consists of a feature extraction module, a fire classification module, and a fire detection module.
The accuracy of FSDNet on the two benchmark datasets is 99.82% and 91.15%, respectively, and its average precision on MS-FS is 86.80%, outperforming mainstream fire detection methods.
arXiv Detail & Related papers (2023-04-15T15:46:08Z)
- Unsupervised Wildfire Change Detection based on Contrastive Learning [1.53934570513443]
Accurately characterizing the severity of a wildfire event helps characterize the fuel conditions in fire-prone areas.
The aim of this study is to develop an autonomous system built on top of high-resolution multispectral satellite imagery, with an advanced deep learning method for detecting burned area change.
arXiv Detail & Related papers (2022-11-26T20:13:14Z)
- ReDFeat: Recoupling Detection and Description for Multimodal Feature Learning [51.07496081296863]
We recouple independent constraints of detection and description of multimodal feature learning with a mutual weighting strategy.
We propose a detector that possesses a large receptive field and is equipped with learnable non-maximum suppression layers.
We build a benchmark that contains cross visible, infrared, near-infrared and synthetic aperture radar image pairs for evaluating the performance of features in feature matching and image registration tasks.
arXiv Detail & Related papers (2022-05-16T04:24:22Z)
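The mutual weighting strategy above couples the two objectives: description is emphasized where the detector is confident, and detection is emphasized where the descriptor already works well. The sketch below encodes that intuition as a simple weighted loss; the exact weighting functions are assumptions, not the paper's loss.

```python
import torch

def mutually_weighted_loss(det_score: torch.Tensor,
                           det_loss: torch.Tensor,
                           desc_loss: torch.Tensor) -> torch.Tensor:
    """Rough sketch of a mutual weighting strategy in the spirit of ReDFeat
    (not the authors' exact loss): per-location description loss is emphasized
    where the detector is confident, and the detection loss is emphasized
    where the descriptor is already discriminative (low description loss).

    det_score: (B, H, W) keypoint probabilities in [0, 1]
    det_loss, desc_loss: (B, H, W) per-location losses
    """
    w_desc = det_score.detach()             # trust confident keypoints for description
    w_det = torch.exp(-desc_loss.detach())  # trust well-described locations for detection
    return (w_det * det_loss).mean() + (w_desc * desc_loss).mean()

if __name__ == "__main__":
    B, H, W = 2, 64, 64
    det_score = torch.rand(B, H, W)
    det_loss = torch.rand(B, H, W)
    desc_loss = torch.rand(B, H, W)
    print(float(mutually_weighted_loss(det_score, det_loss, desc_loss)))
```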