Real-Time Wildfire Localization on the NASA Autonomous Modular Sensor using Deep Learning
- URL: http://arxiv.org/abs/2601.14475v1
- Date: Tue, 20 Jan 2026 20:56:34 GMT
- Title: Real-Time Wildfire Localization on the NASA Autonomous Modular Sensor using Deep Learning
- Authors: Yajvan Ravan, Aref Malek, Chester Dolph, Nikhil Behari,
- Abstract summary: We introduce a human-annotated dataset from the NASA Autonomous Modular Sensor (AMS). Our dataset combines spectral data from 12 different channels, including infrared (IR), short-wave IR (SWIR), and thermal. We demonstrate results from a deep-learning model to automate the human-intensive process of fire perimeter determination.
- Score: 4.4145031895964415
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: High-altitude, multi-spectral aerial imagery is scarce and expensive to acquire, yet it is necessary for algorithmic advances and the application of machine learning models to high-impact problems such as wildfire detection. We introduce a human-annotated dataset from the NASA Autonomous Modular Sensor (AMS) using 12-channel, medium-to-high-altitude (3-50 km) aerial wildfire images similar to those used in current US wildfire missions. Our dataset combines spectral data from 12 different channels, including infrared (IR), short-wave IR (SWIR), and thermal. We take imagery from 20 wildfire missions and randomly sample small patches to generate over 4000 images with high variability, including occlusions by smoke/clouds, easily-confused false positives, and nighttime imagery. We demonstrate results from a deep-learning model to automate the human-intensive process of fire perimeter determination. We train two deep neural networks, one for image classification and the other for pixel-level segmentation. The networks are combined into a unique real-time segmentation model to efficiently localize active wildfire on an incoming image feed. Our model achieves 96% classification accuracy, 74% Intersection-over-Union (IoU), and 84% recall, surpassing past methods, including models trained on satellite data and classical color-rule algorithms. By leveraging a multi-spectral dataset, our model is able to detect active wildfire at nighttime and behind clouds while distinguishing fire from false positives. We find that data from the SWIR, IR, and thermal bands is the most important for distinguishing fire perimeters. Our code and dataset can be found here: https://github.com/nasa/Autonomous-Modular-Sensor-Wildfire-Segmentation/tree/main and https://drive.google.com/drive/folders/1-u4vs9rqwkwgdeeeoUhftCxrfe_4QPTn?=usp=drive_link
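The abstract describes a cascaded design: a lightweight classifier screens incoming patches, and only patches flagged as containing fire are passed to the heavier segmentation network. The sketch below illustrates that gating idea along with the IoU metric the paper reports; `classify`, `segment`, and the 0.5 threshold are hypothetical stand-ins for the paper's trained networks and tuning, not its actual implementation.

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-Union for two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def cascade_segment(patches, classify, segment, threshold=0.5):
    """Gate each patch through a cheap classifier; only patches scored
    at or above `threshold` reach the expensive segmentation model.
    Patches rejected by the classifier get an all-background mask."""
    masks = []
    for patch in patches:
        if classify(patch) >= threshold:
            masks.append(segment(patch))
        else:
            masks.append(np.zeros(patch.shape[:2], dtype=bool))
    return masks

# Demo with stand-in models: mean brightness as the "classifier",
# simple thresholding as the "segmenter".
patches = [np.zeros((4, 4)), np.ones((4, 4))]
masks = cascade_segment(patches,
                        classify=lambda p: float(p.mean()),
                        segment=lambda p: p > 0.5)
print([m.any() for m in masks])  # → [False, True]
```

The benefit of the cascade is that the per-pixel segmentation cost is paid only on the (typically small) fraction of patches the classifier flags, which is what makes real-time operation on an incoming image feed feasible.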
Related papers
- PyroFocus: A Deep Learning Approach to Real-Time Wildfire Detection in Multispectral Remote Sensing Imagery [0.0]
Rapid and accurate wildfire detection is crucial for emergency response and environmental management. In airborne and spaceborne missions, real-time algorithms must distinguish between no fire, active fire, and post-fire conditions. We introduce PyroFocus, a two-stage pipeline that performs fire classification followed by fire radiative power (FRP) regression or segmentation to reduce inference time and computational cost for onboard deployment.
arXiv Detail & Related papers (2025-12-02T21:59:45Z)
- Adapting Vehicle Detectors for Aerial Imagery to Unseen Domains with Weak Supervision [46.87579355047397]
This paper proposes a novel method that uses generative AI to synthesize high-quality aerial images and their labels. Our key contribution is the development of a multi-stage, multi-modal knowledge transfer framework.
arXiv Detail & Related papers (2025-07-28T16:38:06Z)
- Zero-Shot Detection of AI-Generated Images [54.01282123570917]
We propose a zero-shot entropy-based detector (ZED) to detect AI-generated images.
Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images.
ZED achieves an average improvement of more than 3% over the SoTA in terms of accuracy.
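The ZED summary above measures how "surprising" an image is under a model of real images. A minimal sketch of that scoring idea, where `real_model_logprob` is a hypothetical stand-in for the paper's actual lossless-coding model of real images:

```python
import numpy as np

def surprise_score(image, real_model_logprob):
    """Average negative log-likelihood of the image under a model of
    real images: higher score = more surprising = more likely to be
    AI-generated, in the spirit of entropy-based detectors."""
    return -float(np.mean(real_model_logprob(image)))

def is_generated(image, real_model_logprob, threshold):
    """Zero-shot decision: no training on generated images is needed,
    only a model of real images and a threshold."""
    return surprise_score(image, real_model_logprob) > threshold

# Demo with a toy "model" that assigns a constant per-pixel log-prob.
img = np.zeros((2, 2))
print(surprise_score(img, lambda im: np.full(im.shape, -1.0)))  # → 1.0
```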
arXiv Detail & Related papers (2024-09-24T08:46:13Z)
- Obscured Wildfire Flame Detection By Temporal Analysis of Smoke Patterns Captured by Unmanned Aerial Systems [0.799536002595393]
This research paper addresses the challenge of detecting obscured wildfires in real-time using drones equipped only with RGB cameras.
We propose a novel methodology that employs semantic segmentation based on the temporal analysis of smoke patterns in video sequences.
arXiv Detail & Related papers (2023-06-30T19:45:43Z)
- Wildfire Detection Via Transfer Learning: A Survey [2.766371147936368]
This paper surveys different publicly available neural network models used for detecting wildfires using regular visible-range cameras which are placed on hilltops or forest lookout towers.
The neural network models are pre-trained on ImageNet-1K and fine-tuned on a custom wildfire dataset.
arXiv Detail & Related papers (2023-06-21T13:57:04Z)
- Analyzing Multispectral Satellite Imagery of South American Wildfires Using CNNs and Unsupervised Learning [0.0]
This study trains a Fully Convolutional Neural Network with skip connections on Landsat 8 images of Ecuador and the Galapagos.
Image segmentation is conducted on the Cirrus Cloud band using K-Means Clustering to simplify continuous pixel values into three discrete classes.
Two additional Convolutional Neural Networks are trained to classify the presence of a wildfire in a patch of land.
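The summary above discretizes a continuous band into three classes with K-Means clustering. A minimal 1-D K-Means sketch in NumPy; the quantile-based initialization and iteration count are illustrative choices, not the study's configuration:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Cluster scalar pixel values into k discrete classes.
    Returns (labels, centroids)."""
    values = np.asarray(values, dtype=float).ravel()
    # Initialize centroids at evenly spaced quantiles for determinism.
    centroids = np.quantile(values, np.linspace(0, 1, k))
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned values.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = values[labels == j].mean()
    return labels, centroids

# Demo: two low, two middle, and two high band values fall into
# three discrete classes.
labels, _ = kmeans_1d([0.0, 0.1, 0.5, 0.6, 1.0, 1.1], k=3)
print(list(labels))  # → [0, 0, 1, 1, 2, 2]
```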
arXiv Detail & Related papers (2022-01-19T02:45:01Z)
- Infrared Small-Dim Target Detection with Transformer under Complex Backgrounds [155.388487263872]
We propose a new infrared small-dim target detection method with the transformer.
We adopt the self-attention mechanism of the transformer to learn the interaction information of image features in a larger range.
We also design a feature enhancement module to learn more features of small-dim targets.
arXiv Detail & Related papers (2021-09-29T12:23:41Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for Autonomous driving, named as SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, the images are collected at one frame every ten seconds across 32 different cities under different weather conditions, periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
- Active Fire Detection in Landsat-8 Imagery: a Large-Scale Dataset and a Deep-Learning Study [1.3764085113103217]
This paper introduces a new large-scale dataset for active fire detection using deep learning techniques.
We present a study on how different convolutional neural network architectures can be used to approximate handcrafted algorithms.
The proposed dataset, source code, and trained models are available on GitHub.
arXiv Detail & Related papers (2021-01-09T19:05:03Z)
- DR-SPAAM: A Spatial-Attention and Auto-regressive Model for Person Detection in 2D Range Data [81.06749792332641]
We propose a person detection network which uses an alternative strategy to combine scans obtained at different times.
DR-SPAAM keeps the intermediate features from the backbone network as a template and recurrently updates the template when a new scan becomes available.
On the DROW dataset, our method outperforms the existing state-of-the-art, while being approximately four times faster.
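DR-SPAAM's recurrent template update can be illustrated with a much-simplified blending rule; in the actual model the blending weights come from a learned spatial-attention module, so the fixed scalar `alpha` below is only a hypothetical stand-in:

```python
import numpy as np

def update_template(template, new_features, alpha=0.25):
    """Blend the stored feature template with features from the newest
    scan: the template carries information across time, so detections
    integrate evidence from multiple scans without reprocessing them."""
    template = np.asarray(template, dtype=float)
    new_features = np.asarray(new_features, dtype=float)
    return alpha * new_features + (1.0 - alpha) * template

# Demo: a zeroed template drifts toward the new scan's features.
print(update_template(np.zeros(4), np.ones(4), alpha=0.25))  # → [0.25 0.25 0.25 0.25]
```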
arXiv Detail & Related papers (2020-04-29T11:01:44Z)
- Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)
- Radioactive data: tracing through training [130.2266320167683]
We propose a new technique, radioactive data, that makes imperceptible changes to this dataset such that any model trained on it will bear an identifiable mark.
Given a trained model, our technique detects the use of radioactive data and provides a level of confidence (p-value).
Our method is robust to data augmentation and to the stochasticity of deep network optimization.
arXiv Detail & Related papers (2020-02-03T18:41:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.