Constructing a Real-World Benchmark for Early Wildfire Detection with the New PYRONEAR-2025 Dataset
- URL: http://arxiv.org/abs/2402.05349v3
- Date: Tue, 14 Oct 2025 17:40:02 GMT
- Title: Constructing a Real-World Benchmark for Early Wildfire Detection with the New PYRONEAR-2025 Dataset
- Authors: Mateo Lostanlen, Nicolas Isla, Jose Guillen, Renzo Zanca, Felix Veith, Cristian Buc, Valentin Barriere,
- Abstract summary: PYRONEAR-2025 is a new dataset composed of both images and videos, allowing for the training and evaluation of smoke plume detection models. The data is sourced from: (i) web-scraped videos of wildfires from public networks of cameras for wildfire detection in-the-wild, (ii) videos from our in-house network of cameras, and (iii) a small portion of synthetic and real images. This dataset includes around 150,000 manual annotations on 50,000 images, covering 640 wildfires.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Early wildfire detection (EWD) is of the utmost importance to enable rapid response efforts, and thus minimize the negative impacts of wildfire spread. To this end, we present PYRONEAR-2025, a new dataset composed of both images and videos, allowing for the training and evaluation of smoke plume detection models, including sequential models. The data is sourced from: (i) web-scraped videos of wildfires from public networks of cameras for wildfire detection in-the-wild, (ii) videos from our in-house network of cameras, and (iii) a small portion of synthetic and real images. This dataset includes around 150,000 manual annotations on 50,000 images, covering 640 wildfires. PYRONEAR-2025 surpasses existing datasets in size and diversity, and includes data from France, Spain, Chile and the United States. We ran cross-dataset experiments using a lightweight state-of-the-art object detection model, similar to those deployed in real-life systems, and found that the proposed dataset is particularly challenging, with an F1 score of around 70\%, yet more stable than existing datasets. Moreover, using it in conjunction with other public datasets helps to reach higher results overall. Last but not least, the video part of the dataset can be used to train a lightweight sequential model, improving global recall while maintaining precision for earlier detections. [We make both our code and data available online](https://github.com/joseg20/wildfires2025).
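As an illustrative aside, the F1 score reported above and the recall/precision trade-off attributed to the sequential model can be sketched with a minimal frame-level evaluation. The labels, the majority-vote smoothing, and the window size below are purely hypothetical stand-ins for the paper's actual pipeline:

```python
# Minimal sketch (hypothetical data): frame-level smoke-detection evaluation.
# Labels are per-frame (1 = smoke present, 0 = no smoke); values are illustrative.

def precision_recall_f1(y_true, y_pred):
    """Compute detection precision, recall, and F1 from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def temporal_smooth(preds, window=3):
    """Majority vote over a sliding window of per-frame predictions --
    a crude stand-in for a sequential model's use of video context."""
    half = window // 2
    out = []
    for i in range(len(preds)):
        ctx = preds[max(0, i - half): i + half + 1]
        out.append(1 if sum(ctx) * 2 > len(ctx) else 0)
    return out

if __name__ == "__main__":
    y_true = [0, 0, 1, 1, 1, 1, 1, 0, 0, 0]
    y_raw  = [0, 1, 1, 0, 1, 1, 1, 0, 1, 0]  # noisy per-frame detector output
    print("raw:      P=%.2f R=%.2f F1=%.2f" % precision_recall_f1(y_true, y_raw))
    print("smoothed: P=%.2f R=%.2f F1=%.2f"
          % precision_recall_f1(y_true, temporal_smooth(y_raw)))
```

On this toy sequence, smoothing the noisy per-frame outputs raises recall while keeping precision roughly unchanged, which is the qualitative behavior the abstract describes for the sequential model.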
Related papers
- Real-Time Wildfire Localization on the NASA Autonomous Modular Sensor using Deep Learning [4.4145031895964415]
We introduce a human-annotated dataset from the NASA Autonomous Modular Sensor (AMS). Our dataset combines spectral data from 12 different channels, including infrared (IR), short-wave IR (SWIR), and thermal. We demonstrate results from a deep-learning model to automate the human-intensive process of fire perimeter determination.
arXiv Detail & Related papers (2026-01-20T20:56:34Z) - Detection Fire in Camera RGB-NIR [0.0]
This report presents an additional NIR dataset, a two-stage detection model, and Patched-YOLO. To improve night-time fire detection accuracy while reducing false positives caused by artificial lights, we propose a two-stage pipeline combining YOLOv11 and EfficientNetV2-B0. Finally, to improve fire detection in RGB images, especially for small and distant objects, we introduce Patched-YOLO.
arXiv Detail & Related papers (2025-12-29T16:48:24Z) - Exploring State-of-the-art models for Early Detection of Forest Fires [0.8127745323109788]
We propose a dataset for early identification of forest fires through visual analysis. We obtained this dataset synthetically by utilising game simulators such as Red Dead Redemption 2. We compared image classification and localisation methods on the proposed dataset.
arXiv Detail & Related papers (2025-11-25T09:13:07Z) - Wildfire Detection Using Vision Transformer with the Wildfire Dataset [0.6229567287607896]
In 2023, wildfires caused 130 deaths nationwide, the highest since 1990. Deep learning models, such as Vision Transformers (ViTs), can enhance early detection by processing complex image data with high accuracy. However, wildfire detection faces challenges, including the availability of high-quality, real-time data.
arXiv Detail & Related papers (2025-05-23T02:08:28Z) - BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels.
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE) and the comprehensive evaluation shows that the models trained with our dataset outperform those trained with the existing datasets.
arXiv Detail & Related papers (2024-07-03T22:41:49Z) - XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z) - SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection [79.23689506129733]
We establish a new benchmark dataset and an open-source method for large-scale SAR object detection. Our dataset, SARDet-100K, is a result of intense surveying, collecting, and standardizing 10 existing SAR detection datasets. To the best of our knowledge, SARDet-100K is the first COCO-level large-scale multi-class SAR object detection dataset ever created.
arXiv Detail & Related papers (2024-03-11T09:20:40Z) - Towards Viewpoint Robustness in Bird's Eye View Segmentation [85.99907496019972]
We study how AV perception models are affected by changes in camera viewpoint.
Small changes to pitch, yaw, depth, or height of the camera at inference time lead to large drops in performance.
We introduce a technique for novel view synthesis and use it to transform collected data to the viewpoint of target rigs.
arXiv Detail & Related papers (2023-09-11T02:10:07Z) - DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion
Models [61.906934570771256]
We present a generic dataset generation model that can produce diverse synthetic images and perception annotations.
Our method builds upon the pre-trained diffusion model and extends text-guided image synthesis to perception data generation.
We show that the rich latent code of the diffusion model can be effectively decoded as accurate perception annotations using a decoder module.
arXiv Detail & Related papers (2023-08-11T14:38:11Z) - Obscured Wildfire Flame Detection By Temporal Analysis of Smoke Patterns
Captured by Unmanned Aerial Systems [0.799536002595393]
This research paper addresses the challenge of detecting obscured wildfires in real-time using drones equipped only with RGB cameras.
We propose a novel methodology that employs semantic segmentation based on the temporal analysis of smoke patterns in video sequences.
arXiv Detail & Related papers (2023-06-30T19:45:43Z) - Wildfire Detection Via Transfer Learning: A Survey [2.766371147936368]
This paper surveys different publicly available neural network models used for detecting wildfires using regular visible-range cameras which are placed on hilltops or forest lookout towers.
The neural network models are pre-trained on ImageNet-1K and fine-tuned on a custom wildfire dataset.
arXiv Detail & Related papers (2023-06-21T13:57:04Z) - AutoShot: A Short Video Dataset and State-of-the-Art Shot Boundary Detection [70.99025467739715]
We release a new public Short video sHot bOundary deTection dataset, named SHOT.
SHOT consists of 853 complete short videos and 11,606 shot annotations, with 2,716 high quality shot boundary annotations in 200 test videos.
Our proposed approach, named AutoShot, achieves higher F1 scores than previous state-of-the-art approaches.
arXiv Detail & Related papers (2023-04-12T19:01:21Z) - RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis [104.53930611219654]
We present a large-scale synthetic dataset for novel view synthesis consisting of 300k images rendered from nearly 2000 complex scenes.
The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis.
Using 4 distinct sources of high-quality 3D meshes, the scenes of our dataset exhibit challenging variations in camera views, lighting, shape, materials, and textures.
arXiv Detail & Related papers (2022-05-14T13:15:32Z) - FIgLib & SmokeyNet: Dataset and Deep Learning Model for Real-Time Wildland Fire Smoke Detection [0.0]
Fire Ignition Library (FIgLib) is a publicly-available dataset of nearly 25,000 labeled wildfire smoke images.
SmokeyNet is a novel deep learning architecture using temporal information from camera imagery for real-time wildfire smoke detection.
When trained on the FIgLib dataset, SmokeyNet outperforms comparable baselines and rivals human performance.
arXiv Detail & Related papers (2021-12-16T03:49:58Z) - Next Day Wildfire Spread: A Machine Learning Data Set to Predict Wildfire Spreading from Remote-Sensing Data [5.814925201882753]
'Next Day Wildfire Spread' is a curated data set of historical wildfires aggregating nearly a decade of remote-sensing data across the United States.
We implement a convolutional autoencoder that takes advantage of the spatial information of this data to predict wildfire spread.
This data set can be used as a benchmark for developing wildfire propagation models based on remote sensing data for a lead time of one day.
arXiv Detail & Related papers (2021-12-04T23:28:44Z) - Pixel Difference Networks for Efficient Edge Detection [71.03915957914532]
We propose a lightweight yet effective architecture named Pixel Difference Network (PiDiNet) for efficient edge detection.
Extensive experiments on BSDS500, NYUD, and Multicue datasets are provided to demonstrate its effectiveness.
A faster version of PiDiNet with less than 0.1M parameters can still achieve performance comparable to the state of the art at 200 FPS.
arXiv Detail & Related papers (2021-08-16T10:42:59Z) - DeepDarts: Modeling Keypoints as Objects for Automatic Scorekeeping in Darts using a Single Camera [75.34178733070547]
Existing multi-camera solutions for automatic scorekeeping in steel-tip darts are very expensive and thus inaccessible to most players.
We present a new approach to keypoint detection and apply it to predict dart scores from a single image taken from any camera angle.
We develop a deep convolutional neural network around this idea and use it to predict dart locations and dartboard calibration points.
arXiv Detail & Related papers (2021-05-20T16:25:57Z) - Fast and Accurate Camera Scene Detection on Smartphones [51.424407411660376]
This paper proposes a novel Camera Scene Detection dataset (CamSDD) containing more than 11K manually crawled images.
We propose an efficient and NPU-friendly CNN model for this task that demonstrates a top-3 accuracy of 99.5% on this dataset.
arXiv Detail & Related papers (2021-05-17T14:06:21Z) - Few-Shot Video Object Detection [70.43402912344327]
We introduce Few-Shot Video Object Detection (FSVOD) with three important contributions.
FSVOD-500 comprises 500 classes with class-balanced videos in each category for few-shot learning.
Our TPN and TMN+ are jointly and end-to-end trained.
arXiv Detail & Related papers (2021-04-30T07:38:04Z) - Few-Shot Learning for Video Object Detection in a Transfer-Learning Scheme [70.45901040613015]
We study the new problem of few-shot learning for video object detection.
We employ a transfer-learning framework to effectively train the video object detector on a large number of base-class objects and a few video clips of novel-class objects.
arXiv Detail & Related papers (2021-03-26T20:37:55Z) - Active Fire Detection in Landsat-8 Imagery: a Large-Scale Dataset and a Deep-Learning Study [1.3764085113103217]
This paper introduces a new large-scale dataset for active fire detection using deep learning techniques.
We present a study on how different convolutional neural network architectures can be used to approximate handcrafted algorithms.
The proposed dataset, source codes and trained models are available on Github.
arXiv Detail & Related papers (2021-01-09T19:05:03Z) - TJU-DHD: A Diverse High-Resolution Dataset for Object Detection [48.94731638729273]
Large-scale, rich-diversity, and high-resolution datasets play an important role in developing better object detection methods.
We build a diverse high-resolution dataset (called TJU-DHD).
The dataset contains 115,354 high-resolution images and 709,330 labeled objects with a large variance in scale and appearance.
arXiv Detail & Related papers (2020-11-18T09:32:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.