Light-YOLOv8-Flame: A Lightweight High-Performance Flame Detection Algorithm
- URL: http://arxiv.org/abs/2504.08389v2
- Date: Tue, 15 Apr 2025 07:44:57 GMT
- Title: Light-YOLOv8-Flame: A Lightweight High-Performance Flame Detection Algorithm
- Authors: Jiawei Lan, Ye Tao, Zhibiao Wang, Haoyang Yu, Wenhua Cui
- Abstract summary: This paper introduces Light-YOLOv8-Flame, a lightweight flame detection algorithm specifically designed for real-time deployment. The proposed model enhances the YOLOv8 architecture through the substitution of the original C2f module with the FasterNet Block module.
- Score: 7.749651062075137
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fire detection algorithms, particularly those based on computer vision, encounter significant challenges such as high computational costs and delayed response times, which hinder their application in real-time systems. To address these limitations, this paper introduces Light-YOLOv8-Flame, a lightweight flame detection algorithm specifically designed for fast and efficient real-time deployment. The proposed model enhances the YOLOv8 architecture through the substitution of the original C2f module with the FasterNet Block module. This new block combines Partial Convolution (PConv) and Convolution (Conv) layers, reducing both computational complexity and model size. A dataset comprising 7,431 images, representing both flame and non-flame scenarios, was collected and augmented for training purposes. Experimental findings indicate that the modified YOLOv8 model achieves a 0.78% gain in mean average precision (mAP) and a 2.05% boost in recall, while reducing the parameter count by 25.34%, with only a marginal decrease in precision by 0.82%. These findings highlight that Light-YOLOv8-Flame offers enhanced detection performance and speed, making it well-suited for real-time fire detection on resource-constrained devices.
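The central modification described above is straightforward to sketch in code: Partial Convolution (PConv) applies a regular convolution to only a fraction of the input channels and passes the rest through unchanged, and a FasterNet Block wraps a PConv with pointwise convolutions. The PyTorch sketch below is a minimal illustration of that idea under assumed settings (a 1/4 partial ratio, a 3x3 kernel, and a pointwise expansion with a residual connection, following the FasterNet design); it is not the authors' released implementation, and the exact layer layout inside Light-YOLOv8-Flame may differ.

```python
import torch
import torch.nn as nn


class PartialConv(nn.Module):
    """Partial Convolution (PConv): convolve only the first 1/ratio of the channels
    and pass the remaining channels through untouched."""

    def __init__(self, channels: int, ratio: int = 4, kernel_size: int = 3):
        super().__init__()
        self.conv_channels = channels // ratio
        self.idle_channels = channels - self.conv_channels
        self.conv = nn.Conv2d(self.conv_channels, self.conv_channels,
                              kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_conv, x_idle = torch.split(x, [self.conv_channels, self.idle_channels], dim=1)
        return torch.cat((self.conv(x_conv), x_idle), dim=1)


class FasterNetBlock(nn.Module):
    """PConv followed by a 1x1 expansion/projection MLP and a residual connection
    (a common FasterNet-style layout; hyperparameters here are assumptions)."""

    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.pconv = PartialConv(channels)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.pconv(x))


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)        # a typical YOLO feature-map shape
    print(FasterNetBlock(64)(x).shape)    # torch.Size([1, 64, 80, 80])
```

Because only a fraction of the channels passes through the 3x3 spatial convolution, a block of this kind carries far fewer weights than a full C2f bottleneck, which is consistent with the 25.34% parameter reduction reported in the abstract.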
Related papers
- HGO-YOLO: Advancing Anomaly Behavior Detection with Hierarchical Features and Lightweight Optimized Detection [0.0]
This study proposes a model called HGO-YOLO, which integrates the HGNetv2 architecture into YOLOv8. Evaluation results show that the proposed algorithm achieves a mAP@0.5 of 87.4% and a recall rate of 81.1%, with a model size of only 4.6 MB and a frame rate of 56 FPS on the CPU.
arXiv Detail & Related papers (2025-03-10T14:29:12Z)
- RS-vHeat: Heat Conduction Guided Efficient Remote Sensing Foundation Model [59.37279559684668]
We introduce RS-vHeat, an efficient multi-modal remote sensing foundation model. Specifically, RS-vHeat applies the Heat Conduction Operator (HCO) with a complexity of $O(N^{1.5})$ and a global receptive field. Compared to attention-based remote sensing foundation models, RS-vHeat reduces memory usage by 84%, reduces FLOPs by 24%, and improves throughput by 2.7 times.
arXiv Detail & Related papers (2024-11-27T01:43:38Z)
- Efficient Diffusion as Low Light Enhancer [63.789138528062225]
Reflectance-Aware Trajectory Refinement (RATR) is a simple yet effective module to refine the teacher trajectory using the reflectance component of images.
Reflectance-aware Diffusion with Distilled Trajectory (ReDDiT) is an efficient and flexible distillation framework tailored for Low-Light Image Enhancement (LLIE).
arXiv Detail & Related papers (2024-10-16T08:07:18Z)
- EFA-YOLO: An Efficient Feature Attention Model for Fire and Flame Detection [3.334973867478745]
We propose two key modules: EAConv (Efficient Attention Convolution) and EADown (Efficient Attention Downsampling).
Based on these two modules, we design an efficient and lightweight flame detection model, EFA-YOLO (Efficient Feature Attention YOLO).
EFA-YOLO exhibits a significant enhancement in detection accuracy (mAP) and inference speed, with the model parameter count reduced by 94.6 and the inference speed improved by 88 times.
arXiv Detail & Related papers (2024-09-19T10:20:07Z)
- Fast vehicle detection algorithm based on lightweight YOLO7-tiny [7.7600847187608135]
This paper proposes a lightweight vehicle detection algorithm based on YOLOv7-tiny (You Only Look Once version seven) called Ghost-YOLOv7.
The width of the model is scaled to 0.5 and the standard convolution of the backbone network is replaced with Ghost convolution to achieve a lighter network and improve detection speed (a sketch of a Ghost convolution module appears after this list).
A Ghost Decoupled Head (GDH) is employed for accurate prediction of vehicle location and class.
arXiv Detail & Related papers (2023-04-12T17:28:30Z)
- High-Throughput, High-Performance Deep Learning-Driven Light Guide Plate Surface Visual Quality Inspection Tailored for Real-World Manufacturing Environments [75.66288398180525]
Light guide plates are essential optical components widely used in a diverse range of applications ranging from medical lighting fixtures to back-lit TV displays.
In this work, we introduce a fully-integrated, high-performance deep learning-driven workflow for light guide plate surface visual quality inspection (VQI) tailored for real-world manufacturing environments.
To enable automated VQI at the edge within the fully-integrated VQI system, a highly compact deep anti-aliased attention condenser neural network (which we name LightDefectNet) was created.
Experiments show that LightDefectNet achieves high detection accuracy.
arXiv Detail & Related papers (2022-12-20T20:11:11Z)
- Light-YOLOv5: A Lightweight Algorithm for Improved YOLOv5 in Complex Fire Scenarios [8.721557548002737]
This paper proposes a lightweight fire detection algorithm of Light-YOLOv5 that achieves a balance of speed and accuracy.
Experiments show that Light-YOLOv5 improves mAP by 3.3% compared to the original algorithm, reduces the number of parameters by 27.1%, decreases the computation by 19.1%, and achieves 91.1 FPS.
arXiv Detail & Related papers (2022-08-29T08:36:04Z)
- StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity of predicting the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple velocities and propose Velocity-aware streaming AP (VsAP) to jointly evaluate the accuracy.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and incur heavy computational costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- FRDet: Balanced and Lightweight Object Detector based on Fire-Residual Modules for Embedded Processor of Autonomous Driving [0.0]
We propose a lightweight one-stage object detector that is balanced to satisfy all the constraints of accuracy, model size, and real-time processing.
Our network aims to maximize the compression of the model while achieving or surpassing YOLOv3 level of accuracy.
arXiv Detail & Related papers (2020-11-16T16:15:43Z)
- Efficient and Compact Convolutional Neural Network Architectures for Non-temporal Real-time Fire Detection [12.515216618616206]
We investigate different Convolutional Neural Network (CNN) architectures and their variants for non-temporal, real-time detection of fire pixel regions in video (or still) imagery.
Two reduced complexity compact CNN architectures (NasNet-A-OnFire and ShuffleNetV2-OnFire) are proposed through experimental analysis to optimise the computational efficiency for this task.
We notably achieve a classification speed-up by a factor of 2.3x for binary classification and 1.3x for superpixel localisation, with runtimes of 40 fps and 18 fps, respectively.
arXiv Detail & Related papers (2020-10-17T17:48:04Z)
- Highly Efficient Salient Object Detection with 100K Parameters [137.74898755102387]
We propose a flexible convolutional module, namely generalized OctConv (gOctConv), to efficiently utilize both in-stage and cross-stages multi-scale features.
We build an extremely lightweight model, namely CSNet, which achieves performance comparable to that of large models on popular object detection benchmarks with only about 0.2% (100k) of their parameters.
arXiv Detail & Related papers (2020-03-12T07:00:46Z)
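As a companion to the Ghost-YOLOv7 entry above, the sketch below shows the idea behind Ghost convolution: a primary convolution produces half of the output channels, and cheap depthwise filters generate the remaining "ghost" feature maps from them. It assumes the standard GhostNet formulation with a ratio of 2 and is illustrative only, not the Ghost-YOLOv7 authors' code.

```python
import torch
import torch.nn as nn


class GhostConv(nn.Module):
    """Ghost convolution (GhostNet-style): a primary convolution produces half of the
    output channels; cheap depthwise convolutions generate the remaining ghost maps."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 1, dw_size: int = 3):
        super().__init__()
        primary_ch = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(primary_ch),
            nn.ReLU(inplace=True),
        )
        # Depthwise "cheap" operation: one small filter per primary channel.
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, out_ch - primary_ch, dw_size,
                      padding=dw_size // 2, groups=primary_ch, bias=False),
            nn.BatchNorm2d(out_ch - primary_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        return torch.cat((y, self.cheap(y)), dim=1)


if __name__ == "__main__":
    out = GhostConv(64, 128)(torch.randn(1, 64, 40, 40))
    print(out.shape)  # torch.Size([1, 128, 40, 40])
```

Because half of the output maps come from depthwise filters rather than full convolutions, the layer needs roughly half the parameters and FLOPs of a standard convolution with the same output width, which is how Ghost-YOLOv7 obtains a lighter backbone.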