Efficient and Compact Convolutional Neural Network Architectures for
Non-temporal Real-time Fire Detection
- URL: http://arxiv.org/abs/2010.08833v1
- Date: Sat, 17 Oct 2020 17:48:04 GMT
- Title: Efficient and Compact Convolutional Neural Network Architectures for
Non-temporal Real-time Fire Detection
- Authors: William Thomson, Neelanjan Bhowmik, Toby P. Breckon
- Abstract summary: We investigate different Convolutional Neural Network (CNN) architectures and their variants for non-temporal, real-time detection of fire pixel regions in video (or still) imagery.
Two reduced-complexity compact CNN architectures (NasNet-A-OnFire and ShuffleNetV2-OnFire) are proposed through experimental analysis to optimise computational efficiency for this task.
We notably achieve a classification speed-up by a factor of 2.3x for binary classification and 1.3x for superpixel localisation, with runtimes of 40 fps and 18 fps respectively.
- Score: 12.515216618616206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic visual fire detection is used to complement traditional fire
detection sensor systems (smoke/heat). In this work, we investigate different
Convolutional Neural Network (CNN) architectures and their variants for
non-temporal, real-time detection of fire pixel regions in video (or
still) imagery. Two reduced-complexity compact CNN architectures
(NasNet-A-OnFire and ShuffleNetV2-OnFire) are proposed through experimental
analysis to optimise the computational efficiency for this task. The results
improve upon the current state-of-the-art solution for fire detection,
achieving an accuracy of 95% for full-frame binary classification and 97% for
superpixel localisation. We notably achieve a classification speed-up by a
factor of 2.3x for binary classification and 1.3x for superpixel localisation,
with runtimes of 40 fps and 18 fps respectively, outperforming prior work in
the field and presenting an efficient, robust, real-time solution for fire
region detection. Subsequent implementation on low-powered devices (Nvidia
Xavier-NX,
achieving 49 fps for full-frame classification via ShuffleNetV2-OnFire)
demonstrates our architectures are suitable for various real-world deployment
applications.
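As a quick sanity check on the reported numbers, the quoted speed-up factors and runtimes together imply the frame rates of the baseline architectures being compared against. A minimal sketch (the helper name and the derived baseline figures are ours, not stated in the paper):

```python
def baseline_fps(optimised_fps: float, speedup: float) -> float:
    """Given an optimised frame rate and its quoted speed-up factor,
    recover the implied frame rate of the unoptimised baseline."""
    return optimised_fps / speedup

# Figures quoted in the abstract:
# 2.3x speed-up at 40 fps (binary classification),
# 1.3x speed-up at 18 fps (superpixel localisation).
print(round(baseline_fps(40, 2.3), 1))  # implied baseline ~17.4 fps
print(round(baseline_fps(18, 1.3), 1))  # implied baseline ~13.8 fps
```

Both implied baselines sit below real-time video rates, which is consistent with the paper's framing of prior architectures as too slow for deployment.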
Related papers
- SegNet: A Segmented Deep Learning based Convolutional Neural Network Approach for Drones Wildfire Detection [2.07180164747172]
This research addresses the pressing challenge of enhancing processing times and detection capabilities in Unmanned Aerial Vehicle (UAV)/drone imagery for global wildfire detection.
We focus on reducing feature maps to boost both time resolution and accuracy, significantly advancing processing speed in real-time wildfire detection.
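Why reducing feature maps helps: the weight count (and multiply-accumulate cost) of a standard convolution scales linearly in both its input and output channel counts, so halving the feature maps on both sides of a layer roughly quarters that layer's cost. A hedged sketch of the arithmetic (the layer shape is illustrative, not taken from the SegNet paper):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Number of weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

# Illustrative internal layer: a 3x3 convolution with 128 -> 128 channels.
full = conv_params(128, 128, 3)  # full-width layer
half = conv_params(64, 64, 3)    # feature maps halved on both sides
print(full, half, full // half)  # ~4x fewer weights after halving
```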
arXiv Detail & Related papers (2024-02-29T15:23:12Z)
- Fire Detection From Image and Video Using YOLOv5 [0.0]
An improved YOLOv5 fire detection deep learning algorithm is proposed.
Fire-YOLOv5 attains excellent results compared to state-of-the-art object detection networks.
When the input image size is 416 x 416 resolution, the average detection time is 0.12 s per frame.
arXiv Detail & Related papers (2023-10-10T06:37:03Z)
- Obscured Wildfire Flame Detection By Temporal Analysis of Smoke Patterns Captured by Unmanned Aerial Systems [0.799536002595393]
This research paper addresses the challenge of detecting obscured wildfires in real-time using drones equipped only with RGB cameras.
We propose a novel methodology that employs semantic segmentation based on the temporal analysis of smoke patterns in video sequences.
arXiv Detail & Related papers (2023-06-30T19:45:43Z)
- FSDNet-An efficient fire detection network for complex scenarios based on YOLOv3 and DenseNet [8.695064779659031]
This paper proposes a fire detection network called FSDNet (Fire Smoke Detection Network), which consists of a feature extraction module, a fire classification module, and a fire detection module.
The accuracy of FSDNet on the two benchmark datasets is 99.82% and 91.15%, respectively, and the average precision on MS-FS is 86.80%, which is better than mainstream fire detection methods.
arXiv Detail & Related papers (2023-04-15T15:46:08Z)
- EAutoDet: Efficient Architecture Search for Object Detection [110.99532343155073]
EAutoDet framework can discover practical backbone and FPN architectures for object detection in 1.4 GPU-days.
We propose a kernel reusing technique by sharing the weights of candidate operations on one edge and consolidating them into one convolution.
In particular, the discovered architectures surpass state-of-the-art object detection NAS methods and achieve 40.1 mAP with 120 FPS and 49.2 mAP with 41.3 FPS on COCO test-dev set.
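The consolidation described above relies on convolution being linear in its kernel: a weighted sum of candidate convolutions applied to the same input equals a single convolution whose kernel is the same weighted sum of the candidate kernels. A toy 1-D sketch of that identity (names and shapes are ours for illustration; EAutoDet itself operates on 2-D feature maps):

```python
def conv1d(x, w):
    """Valid-mode 1-D cross-correlation with kernel w."""
    k = len(w)
    return [sum(x[i + j] * w[j] for j in range(k))
            for i in range(len(x) - k + 1)]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
candidates = [[1.0, 0.0, -1.0], [0.5, 0.5, 0.5]]  # two candidate kernels
alphas = [0.7, 0.3]                               # architecture weights

# Separate convolutions, then a weighted sum of their outputs ...
separate = [sum(a * y for a, y in zip(alphas, ys))
            for ys in zip(*(conv1d(x, w) for w in candidates))]

# ... equals one convolution with the consolidated (weight-shared) kernel.
merged_kernel = [sum(a * w[j] for a, w in zip(alphas, candidates))
                 for j in range(3)]
merged = conv1d(x, merged_kernel)
print(all(abs(a - b) < 1e-9 for a, b in zip(separate, merged)))  # True
```

This is why consolidating candidate operations into one convolution preserves the search result while cutting memory and compute during the search.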
arXiv Detail & Related papers (2022-03-21T05:56:12Z)
- Weakly-supervised fire segmentation by visualizing intermediate CNN layers [82.75113406937194]
Fire localization in images and videos is an important step for an autonomous system to combat fire incidents.
We consider weakly supervised segmentation of fire in images, in which only image labels are used to train the network.
We show that, for fire segmentation, which is a binary segmentation problem, the mean value of features in a mid-layer of a classification CNN can perform better than the conventional Class Activation Mapping (CAM) method.
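The idea above can be stated compactly: instead of CAM's class-weighted combination of the final feature maps, a saliency map is obtained by simply averaging a mid-layer's feature maps channel-wise at each spatial location. A hedged pure-Python sketch (tensor shapes and values are illustrative, not from the paper):

```python
def channelwise_mean(features):
    """features: a list of C feature maps, each an H x W grid of floats.
    Returns one H x W saliency map: the mean across channels."""
    c = len(features)
    h, w = len(features[0]), len(features[0][0])
    return [[sum(fm[i][j] for fm in features) / c for j in range(w)]
            for i in range(h)]

# Two 2x2 feature maps from a hypothetical mid-layer.
feats = [[[0.0, 2.0], [4.0, 6.0]],
         [[2.0, 2.0], [0.0, 2.0]]]
print(channelwise_mean(feats))  # [[1.0, 2.0], [2.0, 4.0]]
```

Thresholding such a map then yields the weakly supervised fire segmentation without any pixel-level labels.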
arXiv Detail & Related papers (2021-11-16T11:56:28Z)
- NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z)
- PV-RCNN++: Point-Voxel Feature Set Abstraction With Local Vector Representation for 3D Object Detection [100.60209139039472]
We propose the Point-Voxel Region-based Convolutional Neural Networks (PV-RCNNs) for accurate 3D detection from point clouds.
Our proposed PV-RCNNs significantly outperform previous state-of-the-art 3D detection methods on both the Waymo Open Dataset and the highly-competitive KITTI benchmark.
arXiv Detail & Related papers (2021-01-31T14:51:49Z)
- Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scaled pedestrians.
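Predicting object centers and scales rather than boxes directly still maps back to boxes by a fixed transform: a detection parameterised by a center point plus width/height yields corner coordinates with simple arithmetic. A small sketch (the function is our illustration, not the paper's code):

```python
def center_scale_to_box(cx: float, cy: float, w: float, h: float):
    """Convert a (center, scale) parameterisation to (x1, y1, x2, y2) corners."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# A hypothetical pedestrian detection: center (50, 80), 20 wide, 40 tall.
print(center_scale_to_box(50.0, 80.0, 20.0, 40.0))  # (40.0, 60.0, 60.0, 100.0)
```

Learning the center/scale form avoids anchor-box hyperparameters, which is the point of the anchor-free design described above.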
arXiv Detail & Related papers (2020-08-19T13:13:01Z)
- R-FCN: Object Detection via Region-based Fully Convolutional Networks [87.62557357527861]
We present region-based, fully convolutional networks for accurate and efficient object detection.
Our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart.
arXiv Detail & Related papers (2016-05-20T15:50:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.