Comprehensive and Comparative Analysis between Transfer Learning and Custom Built VGG and CNN-SVM Models for Wildfire Detection
- URL: http://arxiv.org/abs/2411.08171v1
- Date: Tue, 12 Nov 2024 20:30:23 GMT
- Title: Comprehensive and Comparative Analysis between Transfer Learning and Custom Built VGG and CNN-SVM Models for Wildfire Detection
- Authors: Aditya V. Jonnalagadda, Hashim A. Hashim, Andrew Harris
- Abstract summary: This paper examines the efficiency and effectiveness of transfer learning in the context of wildfire detection.
Three purpose-built models -- Visual Geometry Group (VGG)-7, VGG-10, and a Convolutional Neural Network-Support Vector Machine hybrid (CNN-SVM) -- are rigorously compared.
We trained and evaluated these models using a dataset that captures the complexities of wildfires.
- Score: 1.8616107180090005
- Abstract: Contemporary Artificial Intelligence (AI) and Machine Learning (ML) research places a significant emphasis on transfer learning, showcasing its transformative potential in enhancing model performance across diverse domains. This paper examines the efficiency and effectiveness of transfer learning in the context of wildfire detection. Three purpose-built models -- Visual Geometry Group (VGG)-7, VGG-10, and a Convolutional Neural Network-Support Vector Machine hybrid (CNN-SVM) -- are rigorously compared with three pretrained models -- VGG-16, VGG-19, and the 101-layer Residual Neural Network (ResNet101). We trained and evaluated these models using a dataset that captures the complexities of wildfires, incorporating variables such as varying lighting conditions, time of day, and diverse terrains. The objective is to discern how transfer learning performs against models trained from scratch in addressing the intricacies of the wildfire detection problem. By assessing performance metrics, including accuracy, precision, recall, and F1 score, a comprehensive understanding of the advantages and disadvantages of transfer learning in this specific domain is obtained. This study contributes valuable insights to the ongoing discourse, guiding future directions in AI and ML research. Keywords: Wildfire prediction, deep learning, machine learning, fire detection
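As a rough illustration of the comparison described in the abstract, the sketch below contrasts a transfer-learning baseline (a pretrained VGG-16 with its convolutional features frozen and a new fire / no-fire head) with a small VGG-style network trained from scratch, and scores both with the same accuracy, precision, recall, and F1 metrics. This is a minimal sketch assuming PyTorch, torchvision, and scikit-learn; the layer widths, the custom architecture, and the evaluation loop are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Transfer-learning baseline: VGG-16 pretrained on ImageNet with a frozen feature
# extractor and a new binary fire / no-fire classifier head.
def build_transfer_vgg16():
    net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in net.features.parameters():
        p.requires_grad = False             # keep the pretrained convolutional filters
    net.classifier[6] = nn.Linear(4096, 2)  # replace the 1000-way ImageNet head
    return net

# Compact VGG-style network trained from scratch (an illustrative stand-in for the
# paper's custom VGG-7; the layer counts and widths here are assumptions).
def build_custom_vgg(num_classes=2):
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(128, num_classes),
    )

@torch.no_grad()
def evaluate(model, loader, device="cpu"):
    """Score a trained model with the four metrics used in the comparison."""
    model.eval().to(device)
    preds, targets = [], []
    for images, labels in loader:
        logits = model(images.to(device))
        preds.extend(logits.argmax(dim=1).cpu().tolist())
        targets.extend(labels.tolist())
    return {
        "accuracy": accuracy_score(targets, preds),
        "precision": precision_score(targets, preds),
        "recall": recall_score(targets, preds),
        "f1": f1_score(targets, preds),
    }
```

The intended contrast is that only the small replacement head is optimized in the transfer-learning model, whereas every parameter of the custom network must be learned from the wildfire data alone.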
Related papers
- Utilizing Transfer Learning and pre-trained Models for Effective Forest Fire Detection: A Case Study of Uttarakhand [17.487540572548337]
Forest fires pose a significant threat to the environment, human life, and property.
Traditional forest fire detection methods are often hindered by their reliance on manual observation and satellite imagery.
This paper emphasizes the role of transfer learning in enhancing forest fire detection in India.
arXiv Detail & Related papers (2024-10-09T10:21:45Z) - Explainable AI Integrated Feature Engineering for Wildfire Prediction [1.7934287771173114]
We conducted a thorough assessment of various machine learning algorithms for both classification and regression tasks relevant to predicting wildfires.
For classifying different types or stages of wildfires, the XGBoost model outperformed others in terms of accuracy and robustness.
The Random Forest regression model showed superior results in predicting the extent of wildfire-affected areas.
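A minimal sketch of the two model families named in this entry, assuming scikit-learn and the xgboost package; the tabular wildfire features, split ratio, and hyperparameters are placeholders rather than the paper's settings.

```python
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def fit_wildfire_models(X_cls, y_cls, X_reg, y_reg):
    """Fit the two model families named above on hypothetical tabular wildfire
    features (e.g. weather, fuel, and terrain descriptors)."""
    # Classification of wildfire type/stage with gradient-boosted trees.
    Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(X_cls, y_cls, test_size=0.2, random_state=0)
    clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
    clf.fit(Xc_tr, yc_tr)

    # Regression of the extent of the wildfire-affected area with a random forest.
    Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(X_reg, y_reg, test_size=0.2, random_state=0)
    reg = RandomForestRegressor(n_estimators=300, random_state=0)
    reg.fit(Xr_tr, yr_tr)

    # Held-out accuracy for the classifier and R^2 for the regressor.
    return clf.score(Xc_te, yc_te), reg.score(Xr_te, yr_te)
```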
arXiv Detail & Related papers (2024-04-01T21:12:44Z) - Performance Analysis of Support Vector Machine (SVM) on Challenging Datasets for Forest Fire Detection [0.0]
This article examines the performance and utilization of Support Vector Machines (SVMs) for the critical task of forest fire detection using image datasets.
SVMs exhibit proficiency in recognizing patterns associated with fire within images.
The knowledge gained from this study aids in the development of efficient forest fire detection systems.
arXiv Detail & Related papers (2024-01-23T17:20:52Z) - Distilling Knowledge from CNN-Transformer Models for Enhanced Human Action Recognition [1.8722948221596285]
The research aims to enhance the performance and efficiency of smaller student models by transferring knowledge from larger teacher models.
The proposed method employs a Transformer vision network as the student model, while a convolutional network serves as the teacher model.
The Vision Transformer (ViT) architecture is introduced as a robust framework for capturing global dependencies in images.
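For context, a common way to realize the teacher-student transfer described here is a soft-target distillation loss; the sketch below is a generic Hinton-style formulation assuming PyTorch, with the temperature and weighting values chosen for illustration rather than taken from the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Generic soft-target distillation objective for a ViT student matching a
    frozen CNN teacher; T and alpha are illustrative, not the paper's values."""
    # Soft-target term: the student's tempered distribution tracks the teacher's.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy on the ground-truth action labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```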
arXiv Detail & Related papers (2023-11-02T14:57:58Z) - Classification of structural building damage grades from multi-temporal photogrammetric point clouds using a machine learning model trained on virtual laser scanning data [58.720142291102135]
We present a novel approach to automatically assess multi-class building damage from real-world point clouds.
We use a machine learning model trained on virtual laser scanning (VLS) data.
The model yields high multi-target classification accuracies (overall accuracy: 92.0% - 95.1%).
arXiv Detail & Related papers (2023-02-24T12:04:46Z) - SSMTL++: Revisiting Self-Supervised Multi-Task Learning for Video Anomaly Detection [108.57862846523858]
We revisit the self-supervised multi-task learning framework, proposing several updates to the original method.
We modernize the 3D convolutional backbone by introducing multi-head self-attention modules.
In our attempt to further improve the model, we study additional self-supervised learning tasks, such as predicting segmentation maps.
arXiv Detail & Related papers (2022-07-16T19:25:41Z) - Revisiting Classifier: Transferring Vision-Language Models for Video Recognition [102.93524173258487]
Transferring knowledge from task-agnostic pre-trained deep models for downstream tasks is an important topic in computer vision research.
In this study, we focus on transferring knowledge for video classification tasks.
We utilize a well-pretrained language model to generate good semantic targets for efficient transfer learning.
arXiv Detail & Related papers (2022-07-04T10:00:47Z) - An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z) - A Convolutional Deep Markov Model for Unsupervised Speech Representation Learning [32.59760685342343]
Probabilistic Latent Variable Models provide an alternative to self-supervised learning approaches for linguistic representation learning from speech.
In this work, we propose ConvDMM, a Gaussian state-space model with non-linear emission and transition functions modelled by deep neural networks.
When trained on a large-scale speech dataset (LibriSpeech), ConvDMM produces features that significantly outperform multiple self-supervised feature extraction methods.
arXiv Detail & Related papers (2020-06-03T21:50:20Z)