A Comparative Analysis of CNN-based Deep Learning Models for Landslide Detection
- URL: http://arxiv.org/abs/2408.01692v1
- Date: Sat, 3 Aug 2024 07:20:10 GMT
- Title: A Comparative Analysis of CNN-based Deep Learning Models for Landslide Detection
- Authors: Omkar Oak, Rukmini Nazre, Soham Naigaonkar, Suraj Sawant, Himadri Vaidya
- Abstract summary: Recent landslides in northern parts of India and Nepal have caused significant disruption, damaging infrastructure and posing threats to local communities.
CNNs, a type of deep learning technique, have shown remarkable success in image processing.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Landslides inflict substantial societal and economic damage, underscoring their global significance as recurrent and destructive natural disasters. Recent landslides in northern parts of India and Nepal have caused significant disruption, damaging infrastructure and posing threats to local communities. Convolutional Neural Networks (CNNs), a type of deep learning technique, have shown remarkable success in image processing. Because of their sophisticated architectures, advanced CNN-based models perform better in landslide detection than conventional algorithms. The purpose of this work is to investigate CNNs' potential in more detail, with an emphasis on comparing CNN-based models for better landslide detection. We compared four traditional semantic segmentation models (U-Net, LinkNet, PSPNet, and FPN), implementing each with a ResNet50 backbone encoder. Moreover, we experimented with hyperparameters such as learning rates, batch sizes, and regularization techniques to fine-tune the models. We computed the confusion matrix for each model and used performance metrics including precision, recall, and F1-score to evaluate and compare the deep learning models. According to the experimental results, LinkNet gave the best results among the four models, with an accuracy of 97.49% and an F1-score of 85.7% (84.49% precision, 87.07% recall). We also present a comprehensive comparison of all pixel-wise confusion matrix results and the time taken to train each model.
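The abstract does not name the implementation the authors used; the sketch below shows one plausible setup using the segmentation_models_pytorch library, in which all four architectures accept a ResNet50 encoder, together with a helper computing the quoted metrics from pixel-wise confusion-matrix counts. The library choice and the helper are illustrative assumptions, not the authors' code.

```python
# A minimal sketch, assuming segmentation_models_pytorch (not the authors' code).
import segmentation_models_pytorch as smp

# The four compared architectures, each with a ResNet50 backbone encoder.
models = {
    "U-Net":   smp.Unet(encoder_name="resnet50", encoder_weights="imagenet", classes=1),
    "LinkNet": smp.Linknet(encoder_name="resnet50", encoder_weights="imagenet", classes=1),
    "PSPNet":  smp.PSPNet(encoder_name="resnet50", encoder_weights="imagenet", classes=1),
    "FPN":     smp.FPN(encoder_name="resnet50", encoder_weights="imagenet", classes=1),
}

def metrics_from_confusion(tp, fp, fn, tn):
    """Precision, recall, F1-score, and accuracy from pixel-wise confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy
```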
Related papers
- Modeling & Evaluating the Performance of Convolutional Neural Networks for Classifying Steel Surface Defects [0.0]
Recently, convolutional neural networks (CNNs) have achieved outstanding identification rates in image classification tasks.
DenseNet201 had the highest detection rate on the NEU dataset, at 98.37 percent.
arXiv Detail & Related papers (2024-06-19T08:14:50Z)
- Reusing Convolutional Neural Network Models through Modularization and Composition [22.823870645316397]
We propose two modularization approaches named CNNSplitter and GradSplitter.
CNNSplitter decomposes a trained convolutional neural network (CNN) model into $N$ small reusable modules.
The resulting modules can be reused to patch existing CNN models or build new CNN models through composition.
arXiv Detail & Related papers (2023-11-08T03:18:49Z)
- Uncertainty in AI: Evaluating Deep Neural Networks on Out-of-Distribution Images [0.0]
This paper investigates the uncertainty of various deep neural networks, including ResNet-50, VGG16, DenseNet121, AlexNet, and GoogleNet, when dealing with perturbed data.
While ResNet-50 was the most accurate single model for OOD images, the ensemble performed even better, correctly classifying all images.
arXiv Detail & Related papers (2023-09-04T22:46:59Z)
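The entry above does not say how its ensemble combines members; a common choice, sketched here under that assumption, is to average the softmax outputs of the five listed architectures, here via their torchvision implementations:

```python
import torch
import torchvision.models as tvm

# torchvision versions of the five evaluated architectures (an assumption;
# the paper's own implementation is not specified in the summary).
members = [
    tvm.resnet50(weights="IMAGENET1K_V2"),
    tvm.vgg16(weights="IMAGENET1K_V1"),
    tvm.densenet121(weights="IMAGENET1K_V1"),
    tvm.alexnet(weights="IMAGENET1K_V1"),
    tvm.googlenet(weights="IMAGENET1K_V1"),
]

@torch.no_grad()
def ensemble_predict(images):
    """Average the members' softmax outputs and take the argmax class."""
    probs = torch.stack([torch.softmax(m.eval()(images), dim=1) for m in members])
    return probs.mean(dim=0).argmax(dim=1)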
- Layer-wise Linear Mode Connectivity [52.6945036534469]
Averaging neural network parameters is an intuitive method for combining the knowledge of two independent models.
It is most prominently used in federated learning.
We analyse the performance of the models that result from averaging single layers, or groups of layers.
arXiv Detail & Related papers (2023-07-13T09:39:10Z)
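Parameter averaging itself is straightforward; a minimal sketch of linearly interpolating two identically-architected PyTorch models (illustrative, not the paper's evaluation code):

```python
import torch

def average_state_dicts(model_a, model_b, alpha=0.5):
    """Layer-wise linear interpolation of two models with identical architecture.
    Non-floating-point buffers (e.g. BatchNorm's num_batches_tracked) are kept
    from model_a unchanged."""
    sd_b = model_b.state_dict()
    return {
        name: alpha * w + (1.0 - alpha) * sd_b[name] if w.is_floating_point() else w
        for name, w in model_a.state_dict().items()
    }

# Usage: averaged_model.load_state_dict(average_state_dicts(model_a, model_b))
```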
- Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to segment objects into parts and then classify the segmented object.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z)
- Land Classification in Satellite Images by Injecting Traditional Features to CNN Models [0.0]
CNN models have high accuracy in solving the land classification problem using satellite or aerial images.
Small-sized CNN models do not provide accuracy as high as their large-sized versions.
We propose a novel method to improve the accuracy of CNN models, especially small ones, by injecting traditional features into them.
arXiv Detail & Related papers (2022-07-21T08:53:34Z)
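The injection mechanism is not detailed in the summary above; one common realization, assumed here, concatenates handcrafted feature vectors with the CNN's pooled features before the classification layer:

```python
import torch
import torch.nn as nn

class FeatureInjectedCNN(nn.Module):
    """A small CNN whose pooled features are concatenated with handcrafted
    (traditional) features, e.g. colour histograms or texture statistics,
    before classification. A generic sketch of the injection idea; the
    paper's exact architecture is not specified in the summary."""
    def __init__(self, n_classes, n_traditional_feats):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32 + n_traditional_feats, n_classes)

    def forward(self, image, traditional_feats):
        cnn_feats = self.conv(image).flatten(1)               # (B, 32)
        fused = torch.cat([cnn_feats, traditional_feats], dim=1)
        return self.head(fused)
```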
- Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment [32.01355605506855]
Quantization-aware training can produce more stable models than standard, adversarial, and Mixup training.
Inputs on which the models disagree often have closer top-1 and top-2 output probabilities, and $Margin$ is a better indicator than the other uncertainty metrics for distinguishing disagreements.
We open-source our code and models as a new benchmark for further study of quantized models.
arXiv Detail & Related papers (2022-04-08T11:19:16Z)
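Consistent with the description above, $Margin$ can be read as the gap between the top-1 and top-2 output probabilities; a small sketch under that reading (the paper's exact formula may differ):

```python
import torch

def margin(logits):
    """Difference between top-1 and top-2 softmax probabilities per input.
    Small margins flag inputs where a quantized model is more likely to
    disagree with its full-precision counterpart."""
    probs = torch.softmax(logits, dim=1)
    top2 = probs.topk(2, dim=1).values
    return top2[:, 0] - top2[:, 1]
```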
- Network Augmentation for Tiny Deep Learning [73.57192520534585]
We introduce Network Augmentation (NetAug), a new training method for improving the performance of tiny neural networks.
We demonstrate the effectiveness of NetAug on image classification and object detection.
arXiv Detail & Related papers (2021-10-17T18:48:41Z)
- Greedy Network Enlarging [53.319011626986004]
We propose a greedy network enlarging method based on the reallocation of computations.
By modifying the computations at different stages step by step, the enlarged network is equipped with an optimal allocation and utilization of MACs.
Applying our method to GhostNet, we achieve state-of-the-art ImageNet top-1 accuracies of 80.9% and 84.3%.
arXiv Detail & Related papers (2021-07-31T08:36:30Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning [82.54669314604097]
EagleEye is a simple yet efficient evaluation component based on adaptive batch normalization.
It unveils a strong correlation between different pruned structures and their final settled accuracy.
This module is also general to plug-in and improve some existing pruning algorithms.
arXiv Detail & Related papers (2020-07-06T01:32:31Z)
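Adaptive batch normalization in the entry above means re-estimating BatchNorm running statistics on a handful of training batches so candidate sub-nets can be ranked without fine-tuning; a minimal sketch under that reading (helper name, batch count, and the (input, target) loader format are illustrative assumptions):

```python
import torch

@torch.no_grad()
def adaptive_bn(pruned_model, loader, n_batches=50):
    """Re-estimate BatchNorm running statistics of a pruned sub-net on a few
    training batches, so its accuracy can be evaluated cheaply without
    fine-tuning any weights."""
    pruned_model.train()           # BN layers update running stats in train mode
    for i, (images, _) in enumerate(loader):
        if i >= n_batches:
            break
        pruned_model(images)       # forward pass only; gradients are disabled
    pruned_model.eval()
    return pruned_model
```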