Uncertainty-Aware Post-Detection Framework for Enhanced Fire and Smoke Detection in Compact Deep Learning Models
- URL: http://arxiv.org/abs/2510.10108v1
- Date: Sat, 11 Oct 2025 08:36:57 GMT
- Title: Uncertainty-Aware Post-Detection Framework for Enhanced Fire and Smoke Detection in Compact Deep Learning Models
- Authors: Aniruddha Srinivas Joshi, Godwyn James William, Shreyas Srinivas Joshi
- Abstract summary: Existing vision-based methods face challenges in balancing efficiency and reliability. Compact deep learning models such as YOLOv5n and YOLOv8n are widely adopted for deployment on UAVs, CCTV systems, and IoT devices. We propose an uncertainty-aware post-detection framework that rescales detection confidences using both statistical uncertainty and domain-relevant visual cues.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate fire and smoke detection is critical for safety and disaster response, yet existing vision-based methods face challenges in balancing efficiency and reliability. Compact deep learning models such as YOLOv5n and YOLOv8n are widely adopted for deployment on UAVs, CCTV systems, and IoT devices, but their reduced capacity often results in false positives and missed detections. Conventional post-detection methods such as Non-Maximum Suppression and Soft-NMS rely only on spatial overlap, which can suppress true positives or retain false alarms in cluttered or ambiguous fire scenes. To address these limitations, we propose an uncertainty-aware post-detection framework that rescales detection confidences using both statistical uncertainty and domain-relevant visual cues. A lightweight Confidence Refinement Network integrates uncertainty estimates with color, edge, and texture features to adjust detection scores without modifying the base model. Experiments on the D-Fire dataset demonstrate improved precision, recall, and mean average precision compared to existing baselines, with only modest computational overhead. These results highlight the effectiveness of post-detection rescoring in enhancing the robustness of compact deep learning models for real-world fire and smoke detection.
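Since the abstract contrasts the proposed rescoring with overlap-only suppression, a minimal NumPy sketch of Gaussian Soft-NMS (one of the baselines named above) may help illustrate what score decay, as opposed to hard removal, looks like. The function names, corner-format boxes, and default `sigma` below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) corner format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS (Bodla et al.): overlapping boxes keep their place
    but have their scores decayed by exp(-IoU^2 / sigma). Hard NMS would
    instead delete every box whose IoU with the kept box exceeds a fixed
    threshold, which is the failure mode discussed in the abstract."""
    boxes = boxes.astype(float).copy()
    scores = scores.astype(float).copy()
    keep_boxes, keep_scores = [], []
    while scores.size > 0:
        i = int(np.argmax(scores))           # highest-scoring remaining box
        keep_boxes.append(boxes[i])
        keep_scores.append(scores[i])
        boxes = np.delete(boxes, i, axis=0)
        scores = np.delete(scores, i)
        if scores.size == 0:
            break
        scores *= np.exp(-iou(keep_boxes[-1], boxes) ** 2 / sigma)  # decay, not delete
        mask = scores > score_thresh          # prune only near-zero scores
        boxes, scores = boxes[mask], scores[mask]
    return np.array(keep_boxes), np.array(keep_scores)
```

With two heavily overlapping detections, hard NMS at a 0.5 IoU threshold would discard the second box outright, while the sketch above keeps it with a reduced score; the paper's framework goes further by rescoring with uncertainty and visual cues rather than overlap alone.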
Related papers
- Learning to Explore: Policy-Guided Outlier Synthesis for Graph Out-of-Distribution Detection [51.93878677594561]
In unsupervised graph-level OOD detection, models are typically trained using only in-distribution (ID) data. We propose a Policy-Guided Outlier Synthesis framework that replaces static outlier synthesis with a learned exploration strategy.
arXiv Detail & Related papers (2026-02-28T11:40:18Z)
- Attack-Aware Deepfake Detection under Counter-Forensic Manipulations [0.30586855806896035]
This work presents an attack-aware deepfake and image-forensics detector designed for robustness, well-calibrated probabilities, and transparent evidence under realistic deployment conditions. The method combines red-team training with randomized test-time defense in a two-stream architecture. Results demonstrate near-perfect ranking across attacks, low calibration error, minimal abstention risk, and controlled tamper localization under regraining.
arXiv Detail & Related papers (2025-12-26T04:05:52Z)
- Towards Trustworthy Wi-Fi Sensing: Systematic Evaluation of Deep Learning Model Robustness to Adversarial Attacks [4.5835414225547195]
We evaluate the robustness of CSI deep learning models under diverse threat models and varying degrees of attack realism. Our experiments show that smaller models, while efficient and equally performant on clean data, are markedly less robust. We confirm that physically realizable signal-space perturbations, designed to be feasible in real wireless channels, significantly reduce attack success.
arXiv Detail & Related papers (2025-11-25T16:24:29Z)
- Diffuse to Detect: A Generalizable Framework for Anomaly Detection with Diffusion Models Applications to UAVs and Beyond [2.4449457537548036]
Anomaly detection in complex, high-dimensional data, such as UAV sensor readings, is essential for operational safety. We propose the Diffuse to Detect (DTD) framework, a novel approach that adapts diffusion models for anomaly detection. DTD employs a single-step diffusion process to predict noise patterns, enabling rapid and precise identification of anomalies without reconstruction errors.
arXiv Detail & Related papers (2025-10-27T02:08:08Z)
- Towards Adversarial Robustness and Uncertainty Quantification in DINOv2-based Few-Shot Anomaly Detection [6.288045889067255]
Foundation models such as DINOv2 have shown strong performance in few-shot anomaly detection. We present one of the first systematic studies of adversarial attacks and uncertainty estimation in this setting. We find that raw anomaly scores are poorly calibrated, revealing a gap between confidence and correctness that limits safety-critical use.
arXiv Detail & Related papers (2025-10-15T15:06:45Z)
- An Uncertainty-aware DETR Enhancement Framework for Object Detection [10.102900613370817]
We propose an uncertainty-aware enhancement framework for DETR-based object detectors. We derive a Bayes Risk formulation to filter high-risk information and improve detection reliability. Experiments on the COCO benchmark show that our method can be effectively integrated into existing DETR variants.
arXiv Detail & Related papers (2025-07-20T07:53:04Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing pose serious risks for generative models. In this paper, we investigate how detection performance varies across model backbones, types, and datasets. We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- CCi-YOLOv8n: Enhanced Fire Detection with CARAFE and Context-Guided Modules [0.3749861135832073]
Fire incidents in urban and forested areas pose serious threats. We present CCi-YOLOv8n, an enhanced YOLOv8 model with targeted improvements for detecting small fires and smoke.
arXiv Detail & Related papers (2024-11-17T09:31:04Z)
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [81.93945602120453]
We introduce an approach that is both general and parameter-efficient for face forgery detection. We design a forgery-style mixture formulation that augments the diversity of forgery source domains. We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
- Diffusion Denoising Process for Perceptron Bias in Out-of-distribution Detection [67.49587673594276]
We introduce a new perceptron bias assumption that suggests discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem.
We demonstrate that the diffusion denoising process (DDP) of DMs serves as a novel form of asymmetric interpolation, which is well-suited to enhance the input and mitigate the overconfidence problem.
Our experiments on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA approaches.
arXiv Detail & Related papers (2022-11-21T08:45:08Z)
- ReDFeat: Recoupling Detection and Description for Multimodal Feature Learning [51.07496081296863]
We recouple independent constraints of detection and description of multimodal feature learning with a mutual weighting strategy.
We propose a detector that possesses a large receptive field and is equipped with learnable non-maximum suppression layers.
We build a benchmark that contains cross visible, infrared, near-infrared and synthetic aperture radar image pairs for evaluating the performance of features in feature matching and image registration tasks.
arXiv Detail & Related papers (2022-05-16T04:24:22Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network, and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluating on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.