Aerial Imagery Pile burn detection using Deep Learning: the FLAME
dataset
- URL: http://arxiv.org/abs/2012.14036v1
- Date: Mon, 28 Dec 2020 00:00:41 GMT
- Title: Aerial Imagery Pile burn detection using Deep Learning: the FLAME
dataset
- Authors: Alireza Shamsoshoara, Fatemeh Afghah, Abolfazl Razi, Liming Zheng,
Peter Z. Fulé, Erik Blasch
- Abstract summary: FLAME (Fire Luminosity Airborne-based Machine learning Evaluation) offers a dataset of aerial images of fires.
This paper provides a fire image dataset collected by drones during a prescribed burn of piled detritus in an Arizona pine forest.
The paper also highlights solutions to two machine learning problems: binary classification of video frames based on the presence or absence of fire flames, and fire segmentation to precisely determine fire borders.
- Score: 9.619617596045911
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wildfires are one of the costliest and deadliest natural disasters in the US,
causing damage to millions of hectares of forest resources and threatening the
lives of people and animals. Of particular importance are risks to firefighters
and operational forces, which highlights the need for leveraging technology to
minimize danger to people and property. FLAME (Fire Luminosity Airborne-based
Machine learning Evaluation) offers a dataset of aerial images of fires along
with methods for fire detection and segmentation which can help firefighters
and researchers to develop optimal fire management strategies. This paper
provides a fire image dataset collected by drones during a prescribed burn of
piled detritus in an Arizona pine forest. The dataset includes video recordings
and thermal heatmaps captured by infrared cameras. The captured videos and
images are annotated and labeled frame-wise to help researchers easily apply
their fire detection and modeling algorithms. The paper also highlights
solutions to two machine learning problems: (1) Binary classification of video
frames based on the presence or absence of fire flames. An Artificial Neural
Network (ANN) method is developed that achieved a 76% classification accuracy.
(2) Fire detection using segmentation methods to precisely determine fire
borders. A deep learning method is designed based on the U-Net up-sampling and
down-sampling approach to extract a fire mask from the video frames. Our FLAME
method achieved a precision of 92% and a recall of 84%. Future research will
extend the technique to free-burning broadcast fires using thermal images.
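To make the first task concrete, here is a minimal sketch of a fire/no-fire frame classifier in Keras. The layer sizes, input resolution, directory layout, and training settings are illustrative assumptions, not the authors' ANN configuration.

```python
# Minimal sketch (not the authors' code): a small convolutional classifier
# for per-frame fire / no-fire labels. All hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_frame_classifier(input_shape=(254, 254, 3)):
    """Binary fire/no-fire classifier over individual video frames."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),            # normalize pixel values to [0, 1]
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # P(fire) for each frame
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Assumed (hypothetical) directory layout: frames sorted into Fire/ and No_Fire/ folders.
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "flame_frames/train", image_size=(254, 254), batch_size=32, label_mode="binary")
# build_frame_classifier().fit(train_ds, epochs=10)
```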
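For the second task, the abstract describes a U-Net-style down-sampling/up-sampling network that extracts a per-pixel fire mask. The sketch below shows that general encoder-decoder pattern with skip connections; the depth, filter counts, and input resolution are assumptions rather than the paper's exact configuration, and the precision/recall metrics simply mirror how the reported 92%/84% figures would be measured.

```python
# Minimal U-Net-style sketch (encoder/decoder with skip connections) producing
# a per-pixel fire probability mask. Depth and filter counts are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(512, 512, 3)):
    inputs = layers.Input(shape=input_shape)

    # Down-sampling path: extract features while shrinking spatial resolution.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Up-sampling path: recover resolution, concatenating encoder features.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)

    # One-channel sigmoid output: per-pixel probability of fire.
    mask = layers.Conv2D(1, 1, activation="sigmoid")(c4)

    model = models.Model(inputs, mask)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
    return model
```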
Related papers
- Obscured Wildfire Flame Detection By Temporal Analysis of Smoke Patterns
Captured by Unmanned Aerial Systems [0.799536002595393]
This research paper addresses the challenge of detecting obscured wildfires in real-time using drones equipped only with RGB cameras.
We propose a novel methodology that employs semantic segmentation based on the temporal analysis of smoke patterns in video sequences.
arXiv Detail & Related papers (2023-06-30T19:45:43Z) - Blind Video Deflickering by Neural Filtering with a Flawed Atlas [90.96203200658667]
We propose a general flicker removal framework that only receives a single flickering video as input without additional guidance.
The core of our approach is utilizing the neural atlas in cooperation with a neural filtering strategy.
To validate our method, we construct a dataset that contains diverse real-world flickering videos.
arXiv Detail & Related papers (2023-03-14T17:52:29Z) - FireRisk: A Remote Sensing Dataset for Fire Risk Assessment with
Benchmarks Using Supervised and Self-supervised Learning [1.6596490382976503]
We propose a novel remote sensing dataset, FireRisk, consisting of 7 fire risk classes with a total of 91,872 images for fire risk assessment.
On FireRisk, we present benchmarks for supervised and self-supervised representations, with Masked Autoencoders (MAE) pre-trained on ImageNet1k achieving the highest classification accuracy, 65.29%.
arXiv Detail & Related papers (2023-03-13T11:54:16Z) - Image-Based Fire Detection in Industrial Environments with YOLOv4 [53.180678723280145]
This work looks into the potential of AI to detect and recognize fires and reduce detection time using object detection on an image stream.
To this end, we collected and labeled appropriate data from several public sources, which we used to train and evaluate several models based on the popular YOLOv4 object detector.
arXiv Detail & Related papers (2022-12-09T11:32:36Z) - FIgLib & SmokeyNet: Dataset and Deep Learning Model for Real-Time
Wildland Fire Smoke Detection [0.0]
Fire Ignition Library (FIgLib) is a publicly-available dataset of nearly 25,000 labeled wildfire smoke images.
SmokeyNet is a novel deep learning architecture using temporal information from camera imagery for real-time wildfire smoke detection.
When trained on the FIgLib dataset, SmokeyNet outperforms comparable baselines and rivals human performance.
arXiv Detail & Related papers (2021-12-16T03:49:58Z) - Attention on Classification for Fire Segmentation [82.75113406937194]
We propose a Convolutional Neural Network (CNN) for joint classification and segmentation of fire in images.
We use a spatial self-attention mechanism to capture long-range dependency between pixels, and a new channel attention module which uses the classification probability as an attention weight.
arXiv Detail & Related papers (2021-11-04T19:52:49Z) - Detecting Damage Building Using Real-time Crowdsourced Images and
Transfer Learning [53.26496452886417]
This paper presents an automated way to extract the damaged building images after earthquakes from social media platforms such as Twitter.
Using transfer learning and 6500 manually labelled images, we trained a deep learning model to recognize images with damaged buildings in the scene.
The trained model achieved good performance when tested on newly acquired images of earthquakes at different locations and ran in near real-time on the Twitter feed after the 2020 M7.0 earthquake in Turkey.
arXiv Detail & Related papers (2021-10-12T06:31:54Z) - Lighting the Darkness in the Deep Learning Era [118.35081853500411]
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination.
Recent advances in this area are dominated by deep learning-based solutions.
We provide a comprehensive survey to cover various aspects ranging from algorithm taxonomy to unsolved open issues.
arXiv Detail & Related papers (2021-04-21T19:12:19Z) - Fire Threat Detection From Videos with Q-Rough Sets [0.0]
Fire under control serves a number of purposes for human civilization, but it becomes a threat once its spread is uncontrolled.
Here we focus on developing an unsupervised method with which the threat of fire can be quantified.
All theories and indices defined here have been experimentally validated with different types of fire videos.
arXiv Detail & Related papers (2021-01-21T06:29:36Z) - Active Fire Detection in Landsat-8 Imagery: a Large-Scale Dataset and a
Deep-Learning Study [1.3764085113103217]
This paper introduces a new large-scale dataset for active fire detection using deep learning techniques.
We present a study on how different convolutional neural network architectures can be used to approximate handcrafted algorithms.
The proposed dataset, source code, and trained models are available on GitHub.
arXiv Detail & Related papers (2021-01-09T19:05:03Z) - Exploring Thermal Images for Object Detection in Underexposure Regions
for Autonomous Driving [67.69430435482127]
Underexposed regions are vital to constructing a complete perception of the surroundings for safe autonomous driving.
Thermal cameras provide an essential alternative for sensing regions where other optical sensors fail to capture interpretable signals.
This work proposes a domain adaptation framework which employs a style transfer technique for transfer learning from visible spectrum images to thermal images.
arXiv Detail & Related papers (2020-06-01T09:59:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.