KutralNet: A Portable Deep Learning Model for Fire Recognition
- URL: http://arxiv.org/abs/2008.06866v1
- Date: Sun, 16 Aug 2020 09:35:25 GMT
- Title: KutralNet: A Portable Deep Learning Model for Fire Recognition
- Authors: Angel Ayala, Bruno Fernandes, Francisco Cruz, David Macêdo, Adriano L. I. Oliveira, and Cleber Zanchettin
- Abstract summary: We propose a new deep learning architecture that requires fewer floating-point operations (flops) for fire recognition.
We also propose a portable approach for fire recognition and the use of modern techniques to reduce the model's computational cost.
One of our models presents 71% fewer parameters than FireNet, while still presenting competitive accuracy and AUROC performance.
- Score: 4.886882441164088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most automatic fire alarm systems detect the presence of fire through
sensors such as thermal, smoke, or flame detectors. A newer approach to the
problem is the use of images to perform the detection. The image-based approach
is promising since it does not need specific sensors and can be easily embedded
in different devices. However, despite their high performance, the computational
cost of the deep learning methods used is a challenge to their deployment on
portable devices. In this work, we propose a new deep learning architecture that
requires fewer floating-point operations (flops) for fire recognition.
Additionally, we propose a portable approach for fire recognition that uses
modern techniques, such as the inverted residual block and depth-wise and octave
convolutions, to reduce the model's computational cost. The experiments show
that our model maintains high accuracy while substantially reducing the number
of parameters and flops. One of our models has 71% fewer parameters than
FireNet while still achieving competitive accuracy and AUROC performance. The
proposed methods are evaluated on the FireNet and FiSmo datasets. The obtained
results are promising for implementing the model on a mobile device, given the
reduced number of flops and parameters achieved.
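As a rough illustration of why a depth-wise separable convolution cuts computational cost relative to a standard convolution, the multiply-accumulate (MAC) counts of the two can be compared directly. This is a sketch under assumed layer dimensions, not the actual KutralNet configuration:

```python
# Compare multiply-accumulate counts (a common FLOP proxy) for a standard
# convolution versus a depth-wise separable one on a hypothetical layer.
# The dimensions below are illustrative, not taken from the KutralNet paper.

def standard_conv_macs(h, w, c_in, c_out, k):
    # Each of the h*w output positions needs k*k*c_in MACs per output filter.
    return h * w * c_out * k * k * c_in

def depthwise_separable_macs(h, w, c_in, c_out, k):
    # Depth-wise step: one k x k filter applied per input channel.
    depthwise = h * w * c_in * k * k
    # Point-wise (1x1) step: mixes channels into c_out outputs.
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

h, w, c_in, c_out, k = 32, 32, 64, 128, 3
std = standard_conv_macs(h, w, c_in, c_out, k)
sep = depthwise_separable_macs(h, w, c_in, c_out, k)
print(f"standard: {std:,} MACs, separable: {sep:,} MACs, ratio: {sep / std:.3f}")
```

For these dimensions the separable layer needs roughly 12% of the MACs of the standard one; in general the ratio is about 1/c_out + 1/k², which is why such blocks appear in lightweight architectures.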
Related papers
- Zero-Shot Detection of AI-Generated Images [54.01282123570917]
We propose a zero-shot entropy-based detector (ZED) to detect AI-generated images.
Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images.
ZED achieves an average improvement of more than 3% over the SoTA in terms of accuracy.
arXiv Detail & Related papers (2024-09-24T08:46:13Z)
- Learning to Make Keypoints Sub-Pixel Accurate [80.55676599677824]
This work addresses the challenge of sub-pixel accuracy in detecting 2D local features.
We propose a novel network that enhances any detector with sub-pixel precision by learning an offset vector for detected features.
arXiv Detail & Related papers (2024-07-16T12:39:56Z)
- SegNet: A Segmented Deep Learning based Convolutional Neural Network Approach for Drones Wildfire Detection [2.07180164747172]
This research addresses the pressing challenge of enhancing processing times and detection capabilities in Unmanned Aerial Vehicle (UAV)/drone imagery for global wildfire detection.
We focus on reducing feature maps to boost both time resolution and accuracy, significantly advancing processing speeds in real-time wildfire detection.
arXiv Detail & Related papers (2024-02-29T15:23:12Z)
- SpirDet: Towards Efficient, Accurate and Lightweight Infrared Small Target Detector [60.42293239557962]
We propose SpirDet, a novel approach for efficient detection of infrared small targets.
We employ a new dual-branch sparse decoder to restore the feature map.
Extensive experiments show that the proposed SpirDet significantly outperforms state-of-the-art models.
arXiv Detail & Related papers (2024-02-08T05:06:14Z)
- Fire Detection From Image and Video Using YOLOv5 [0.0]
An improved YOLOv5 fire detection deep learning algorithm is proposed.
Fire-YOLOv5 attains excellent results compared to state-of-the-art object detection networks.
When the input image size is 416 x 416 resolution, the average detection time is 0.12 s per frame.
arXiv Detail & Related papers (2023-10-10T06:37:03Z)
- An FPGA smart camera implementation of segmentation models for drone wildfire imagery [0.9837190842240352]
Wildfires represent one of the most significant natural disasters worldwide, due to their impact on various societal and environmental levels.
One of the most promising approaches to wildfire fighting is the use of drones equipped with visible and infrared cameras for remote detection, monitoring, and fire-spread assessment in close proximity to the affected areas.
In this work, we posit that smart cameras based on low-power consumption field-programmable gate arrays (FPGAs) and binarized neural networks (BNNs) represent a cost-effective alternative for implementing onboard computing on the edge.
arXiv Detail & Related papers (2023-09-04T02:30:14Z)
- Obscured Wildfire Flame Detection By Temporal Analysis of Smoke Patterns Captured by Unmanned Aerial Systems [0.799536002595393]
This research paper addresses the challenge of detecting obscured wildfires in real-time using drones equipped only with RGB cameras.
We propose a novel methodology that employs semantic segmentation based on the temporal analysis of smoke patterns in video sequences.
arXiv Detail & Related papers (2023-06-30T19:45:43Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- Illumination and Temperature-Aware Multispectral Networks for Edge-Computing-Enabled Pedestrian Detection [10.454696553567809]
This study proposes a lightweight Illumination and Temperature-aware Multispectral Network (IT-MN) for accurate and efficient pedestrian detection.
The proposed algorithm is evaluated by comparing with the selected state-of-the-art algorithms using a public dataset collected by in-vehicle cameras.
The results show that the proposed algorithm achieves a low miss rate and inference time at 14.19% and 0.03 seconds per image pair on GPU.
arXiv Detail & Related papers (2021-12-09T17:27:23Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and incur heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- Probing Model Signal-Awareness via Prediction-Preserving Input Minimization [67.62847721118142]
We evaluate models' ability to capture the correct vulnerability signals to produce their predictions.
We measure the signal awareness of models using a new metric we propose, Signal-aware Recall (SAR).
The results show a sharp drop in the model's Recall from the high 90s to sub-60s with the new metric.
arXiv Detail & Related papers (2020-11-25T20:05:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.