WIT-UAS: A Wildland-fire Infrared Thermal Dataset to Detect Crew Assets From Aerial Views
- URL: http://arxiv.org/abs/2312.09159v1
- Date: Thu, 14 Dec 2023 17:29:26 GMT
- Title: WIT-UAS: A Wildland-fire Infrared Thermal Dataset to Detect Crew Assets From Aerial Views
- Authors: Andrew Jong, Mukai Yu, Devansh Dhrafani, Siva Kailas, Brady Moon, Katia Sycara, Sebastian Scherer
- Abstract summary: We present the Wildland-fire Infrared Thermal (WIT-UAS) dataset for long-wave infrared sensing of crew and vehicle assets amidst prescribed wildland fire environments.
WIT-UAS-ROS consists of full ROS bag files containing sensor and robot data of UAS flight over the fire, and WIT-UAS-Image contains hand-labeled long-wave infrared (LWIR) images extracted from WIT-UAS-ROS.
Our dataset is the first to focus on asset detection in a wildland fire environment.
- Score: 0.8741284539870512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the Wildland-fire Infrared Thermal (WIT-UAS) dataset for long-wave
infrared sensing of crew and vehicle assets amidst prescribed wildland fire
environments. While such a dataset is crucial for safety monitoring in wildland
fire applications, to the authors' awareness, no such dataset focusing on
assets near fire is publicly available. Presumably, this is due to the barrier
to entry of collaborating with fire management personnel. We present two
related data subsets: WIT-UAS-ROS consists of full ROS bag files containing
sensor and robot data of UAS flight over the fire, and WIT-UAS-Image contains
hand-labeled long-wave infrared (LWIR) images extracted from WIT-UAS-ROS. Our
dataset is the first to focus on asset detection in a wildland fire
environment. We show that thermal detection models trained without fire data
frequently detect false positives by classifying fire as people. By adding our
dataset to training, we show that the false positive rate is reduced
significantly. Yet asset detection in wildland fire environments is still
significantly more challenging than detection in urban environments, due to
dense obscuring trees, greater heat variation, and overbearing thermal signal
of the fire. We publicize this dataset to encourage the community to study more
advanced models to tackle this challenging environment. The dataset, code and
pretrained models are available at
\url{https://github.com/castacks/WIT-UAS-Dataset}.
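
For quick inspection of the WIT-UAS-ROS subset, the minimal sketch below dumps LWIR frames from a bag file to disk using the ROS 1 Python API. The bag file name and thermal image topic are placeholders rather than names confirmed by the paper; check the repository documentation or `rosbag info` for the actual topic names before running.

```python
# Minimal sketch (assumed topic and file names): export LWIR frames from a
# WIT-UAS-ROS bag so they can be viewed or labeled outside of ROS.
import os
import cv2
import rosbag
from cv_bridge import CvBridge

BAG_PATH = "wit_uas_flight.bag"        # placeholder bag file name
THERMAL_TOPIC = "/thermal/image_raw"   # assumed topic; verify with `rosbag info`
OUT_DIR = "lwir_frames"

os.makedirs(OUT_DIR, exist_ok=True)
bridge = CvBridge()

with rosbag.Bag(BAG_PATH) as bag:
    for topic, msg, t in bag.read_messages(topics=[THERMAL_TOPIC]):
        # Convert sensor_msgs/Image to a NumPy array without rescaling raw values.
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        cv2.imwrite(os.path.join(OUT_DIR, f"{t.to_nsec()}.png"), frame)
```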
Related papers
- IRSAM: Advancing Segment Anything Model for Infrared Small Target Detection [55.554484379021524]
The Infrared Small Target Detection (IRSTD) task falls short of satisfactory performance due to a notable domain gap between natural and infrared images.
We propose the IRSAM model for IRSTD, which improves SAM's encoder-decoder architecture to learn better feature representation of infrared small objects.
arXiv Detail & Related papers (2024-07-10T10:17:57Z)
- Rapid Wildfire Hotspot Detection Using Self-Supervised Learning on Temporal Remote Sensing Data [0.12289361708127873]
Leveraging remotely sensed data from satellite networks and advanced AI models to automatically detect hotspots is an effective way to build wildfire monitoring systems.
We propose a novel dataset containing time series of remotely sensed data related to European fire events and a Self-Supervised Learning (SSL)-based model able to analyse multi-temporal data and identify hotspots in potentially near real time.
We train and evaluate the performance of our model using our dataset and Thraws, a dataset of thermal anomalies including several fire events, obtaining an F1 score of 63.58.
arXiv Detail & Related papers (2024-05-30T14:31:46Z)
- A Multimodal Supervised Machine Learning Approach for Satellite-based Wildfire Identification in Europe [0.34410212782758043]
We propose a wildfire identification solution to improve the accuracy of automated satellite-based hotspot detection systems.
We cross-reference the thermal anomalies detected by the Moderate-resolution Imaging Spectroradiometer (MODIS) and the Visible Infrared Imaging Radiometer Suite (VIIRS) hotspot services.
Then, we propose a novel supervised machine learning approach to disambiguate hotspot detections, distinguishing between wildfires and other events.
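As a schematic illustration of what cross-referencing two hotspot products can look like (not the authors' pipeline), the sketch below pairs MODIS and VIIRS detections by spatial and temporal proximity; the record layout, distance threshold, and time window are assumptions.

```python
# Schematic only: confirm MODIS hotspots that have a nearby VIIRS detection.
# Field names ('lat', 'lon', 'time') and thresholds are hypothetical.
from math import radians, sin, cos, asin, sqrt
from datetime import timedelta

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def cross_reference(modis, viirs, max_km=1.0, max_dt=timedelta(hours=12)):
    """Return MODIS hotspots matched by at least one VIIRS hotspot in space and time."""
    confirmed = []
    for m in modis:
        for v in viirs:
            close_in_time = abs(m["time"] - v["time"]) <= max_dt
            close_in_space = haversine_km(m["lat"], m["lon"], v["lat"], v["lon"]) <= max_km
            if close_in_time and close_in_space:
                confirmed.append(m)
                break
    return confirmed
```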
arXiv Detail & Related papers (2023-07-27T08:28:57Z)
- Wildfire Detection Via Transfer Learning: A Survey [2.766371147936368]
This paper surveys different publicly available neural network models used for detecting wildfires using regular visible-range cameras which are placed on hilltops or forest lookout towers.
The neural network models are pre-trained on ImageNet-1K and fine-tuned on a custom wildfire dataset.
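To make that recipe concrete, here is a minimal PyTorch sketch of the general pattern (an ImageNet-1K pre-trained backbone with a new binary fire / no-fire head); the backbone choice, dataset path, and hyperparameters are illustrative assumptions, not details taken from the survey.

```python
# Minimal transfer-learning sketch: fine-tune only a new classification head
# on top of a frozen ImageNet-1K pre-trained backbone. Paths and
# hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                      # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 2)    # new head: fire vs. no-fire
model = model.to(device)

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("wildfire_dataset/train", transform=tfm)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                    # one epoch shown for brevity
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```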
arXiv Detail & Related papers (2023-06-21T13:57:04Z)
- FireRisk: A Remote Sensing Dataset for Fire Risk Assessment with Benchmarks Using Supervised and Self-supervised Learning [1.6596490382976503]
We propose a novel remote sensing dataset, FireRisk, consisting of 7 fire risk classes with a total of 91,872 images for fire risk assessment.
On FireRisk, we present benchmark supervised and self-supervised representations, with Masked Autoencoders (MAE) pre-trained on ImageNet1k achieving the highest classification accuracy, 65.29%.
arXiv Detail & Related papers (2023-03-13T11:54:16Z)
- Multimodal Wildland Fire Smoke Detection [5.15911752972989]
Research has shown that climate change creates warmer temperatures and drier conditions, leading to longer wildfire seasons and increased wildfire risks in the U.S.
We present our work on integrating multiple data sources in SmokeyNet, a deep learning model using temporal information to detect smoke from wildland fires.
With a time-to-detection of only a few minutes, SmokeyNet can serve as an automated early notification system, providing a useful tool in the fight against destructive wildfires.
arXiv Detail & Related papers (2022-12-29T01:16:06Z)
- Image-Based Fire Detection in Industrial Environments with YOLOv4 [53.180678723280145]
This work looks into the potential of AI to detect and recognize fires and reduce detection time using object detection on an image stream.
To this end, we collected and labeled appropriate data from several public sources, which have been used to train and evaluate several models based on the popular YOLOv4 object detector.
arXiv Detail & Related papers (2022-12-09T11:32:36Z)
- A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z)
- Meta-UDA: Unsupervised Domain Adaptive Thermal Object Detection using Meta-Learning [64.92447072894055]
Infrared (IR) cameras are robust under adverse illumination and lighting conditions.
We propose an algorithm-agnostic meta-learning framework to improve existing UDA methods.
We produce a state-of-the-art thermal detector for the KAIST and DSIAC datasets.
arXiv Detail & Related papers (2021-10-07T02:28:18Z)
- Speak2Label: Using Domain Knowledge for Creating a Large Scale Driver Gaze Zone Estimation Dataset [55.391532084304494]
Driver Gaze in the Wild dataset contains 586 recordings, captured during different times of the day including evenings.
Driver Gaze in the Wild dataset contains 338 subjects with an age range of 18-63 years.
arXiv Detail & Related papers (2020-04-13T14:47:34Z)
- Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)