A Computer Vision Enabled damage detection model with improved YOLOv5
based on Transformer Prediction Head
- URL: http://arxiv.org/abs/2303.04275v1
- Date: Tue, 7 Mar 2023 22:53:36 GMT
- Title: A Computer Vision Enabled damage detection model with improved YOLOv5
based on Transformer Prediction Head
- Authors: Arunabha M. Roy and Jayabrata Bhaduri
- Abstract summary: Current state-of-the-art deep learning (DL)-based damage detection models often lack superior feature extraction capability in complex and noisy environments.
DenseSPH-YOLOv5 is a real-time DL-based high-performance damage detection model where DenseNet blocks have been integrated with the backbone.
DenseSPH-YOLOv5 obtains a mean average precision (mAP) of 85.25%, an F1-score of 81.18%, and a precision (P) of 89.51%, outperforming current state-of-the-art models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Objective: Computer vision-based, up-to-date, and accurate damage
classification and localization are of decisive importance for infrastructure
monitoring, safety, and the serviceability of civil infrastructure. Current
state-of-the-art deep learning (DL)-based damage detection models, however,
often lack the feature extraction capability needed in complex and noisy
environments, limiting the development of accurate and reliable object
distinction. Method: To this end,
we present DenseSPH-YOLOv5, a real-time DL-based high-performance damage
detection model in which DenseNet blocks have been integrated into the
backbone to improve the preservation and reuse of critical feature
information. Additionally, convolutional block attention modules (CBAM) have
been implemented to strengthen attention for discriminative deep spatial
feature extraction, yielding superior detection in various challenging
environments. Moreover, additional feature fusion layers and a
Swin-Transformer Prediction Head (SPH) have been added, leveraging an advanced
self-attention mechanism for more efficient detection of objects at multiple
scales while simultaneously reducing computational complexity. Results:
Evaluated on the large-scale Road Damage Dataset (RDD-2018) at a detection
rate of 62.4 FPS, DenseSPH-YOLOv5 obtains a mean average precision (mAP) of
85.25%, an F1-score of 81.18%, and a precision (P) of 89.51%, outperforming
current state-of-the-art models. Significance: The present
research provides an effective and efficient damage localization model that
addresses the shortcomings of existing DL-based damage detection models by
providing highly accurate, localized bounding-box predictions. The current
work constitutes a step towards an accurate and robust automated damage
detection system for real-time in-field applications.
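The CBAM blocks mentioned in the abstract apply channel attention followed by spatial attention to a feature map. The sketch below is a minimal NumPy illustration of that sequential structure, not the paper's implementation: the shared MLP weights and tensor shapes are arbitrary, and the learned 7x7 convolution in CBAM's spatial branch is simplified to an average of the pooled maps.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Channel attention: global avg- and max-pooled descriptors pass through
    a shared two-layer MLP; the sum is squashed to per-channel scales."""
    # x: (C, H, W)
    avg = x.mean(axis=(1, 2))                       # (C,)
    mx = x.max(axis=(1, 2))                         # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # ReLU hidden layer
    scale = sigmoid(mlp(avg) + mlp(mx))             # (C,), values in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x):
    """Spatial attention: channel-wise avg and max maps; CBAM's learned 7x7
    conv is replaced by a simple mean of the two maps for this sketch."""
    avg_map = x.mean(axis=0)                        # (H, W)
    max_map = x.max(axis=0)                         # (H, W)
    scale = sigmoid(0.5 * (avg_map + max_map))      # (H, W), values in (0, 1)
    return x * scale[None, :, :]

def cbam(x, w1, w2):
    # CBAM applies channel attention first, then spatial attention.
    return spatial_attention(channel_attention(x, w1, w2))

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                 # illustrative shapes, reduction ratio r
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))   # reduction layer of the shared MLP
w2 = rng.standard_normal((C, C // r))   # expansion layer of the shared MLP
y = cbam(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because both attention stages multiply by sigmoid-valued scales, the output preserves the input's shape while rescaling each activation toward the regions and channels the attention deems informative.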
Related papers
- YOLO-ELA: Efficient Local Attention Modeling for High-Performance Real-Time Insulator Defect Detection [0.0]
Existing detection methods for insulator defect identification from unmanned aerial vehicles struggle with complex background scenes and small objects.
This paper proposes a new attention-based foundation architecture, YOLO-ELA, to address this issue.
Experimental results on high-resolution UAV images show that our method achieved a state-of-the-art performance of 96.9% mAP@0.5 and a real-time detection speed of 74.63 frames per second.
arXiv Detail & Related papers (2024-10-15T16:00:01Z)
- AI-Powered Dynamic Fault Detection and Performance Assessment in Photovoltaic Systems [44.99833362998488]
The intermittent nature of photovoltaic (PV) solar energy leads to power losses of 10-70% and an average energy production decrease of 25%.
Current fault detection strategies are costly and often yield unreliable results due to complex data signal profiles.
This research presents a computational model using the PVlib library in Python, incorporating a dynamic loss quantification algorithm.
arXiv Detail & Related papers (2024-08-19T23:52:06Z)
- Accelerating Domain-Aware Electron Microscopy Analysis Using Deep Learning Models with Synthetic Data and Image-Wide Confidence Scoring [0.0]
We create a physics-based synthetic image and data generator, resulting in a machine learning model that achieves comparable precision (0.86), recall (0.63), F1 score (0.71), and engineering property prediction (R² = 0.82).
Our study demonstrates that synthetic data can eliminate human reliance in ML and provides a means for domain awareness in cases where many feature detections per image are needed.
arXiv Detail & Related papers (2024-08-02T20:15:15Z)
- Structural damage detection via hierarchical damage information with volumetric assessment [1.4470320778878742]
Structural health monitoring (SHM) is essential for ensuring the safety and longevity of infrastructure.
This study introduces the Guided Detection Network (Guided-DetNet), a framework designed to address these challenges.
Guided-DetNet is characterized by a Generative Attention Module (GAM), Hierarchical Elimination Algorithm (HEA), and Volumetric Contour Visual Assessment (VCVA).
arXiv Detail & Related papers (2024-07-29T04:33:04Z)
- Machine Learning for ALSFRS-R Score Prediction: Making Sense of the Sensor Data [44.99833362998488]
Amyotrophic Lateral Sclerosis (ALS) is a rapidly progressive neurodegenerative disease that presents individuals with limited treatment options.
The present investigation, spearheaded by the iDPP@CLEF 2024 challenge, focuses on utilizing sensor-derived data obtained through an app.
arXiv Detail & Related papers (2024-07-10T19:17:23Z)
- Machine learning-based network intrusion detection for big and imbalanced data using oversampling, stacking feature embedding and feature extraction [6.374540518226326]
Intrusion Detection Systems (IDS) play a critical role in protecting interconnected networks by detecting malicious actors and activities.
This paper introduces a novel ML-based network intrusion detection model that uses Random Oversampling (RO) to address data imbalance, stacking feature embedding, and principal component analysis (PCA) for dimensionality reduction.
Using the CIC-IDS 2017 dataset, DT, RF, and ET models reach 99.99% accuracy, while DT and RF models obtain 99.94% accuracy on the CIC-IDS 2018 dataset.
arXiv Detail & Related papers (2024-01-22T05:49:41Z)
- Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (MAP) of approximately 45.7%, which is a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- A fast accurate fine-grain object detection model based on YOLOv4 deep neural network [0.0]
Early identification and prevention of various plant diseases in commercial farms and orchards is a key feature of precision agriculture technology.
This paper presents a high-performance real-time fine-grain object detection framework that addresses several obstacles in plant disease detection.
The proposed model is built on an improved version of the You Only Look Once (YOLOv4) algorithm.
arXiv Detail & Related papers (2021-10-30T17:56:13Z)
- Progressive Self-Guided Loss for Salient Object Detection [102.35488902433896]
We present a progressive self-guided loss function to facilitate deep learning-based salient object detection in images.
Our framework takes advantage of adaptively aggregated multi-scale features to locate and detect salient objects effectively.
arXiv Detail & Related papers (2021-01-07T07:33:38Z)
- Assessing out-of-domain generalization for robust building damage detection [78.6363825307044]
Building damage detection can be automated by applying computer vision techniques to satellite imagery.
Models must be robust to a shift in distribution between disaster imagery available for training and the images of the new event.
We argue that future work should focus on the OOD regime instead.
arXiv Detail & Related papers (2020-11-20T10:30:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.