Structural damage detection via hierarchical damage information with volumetric assessment
- URL: http://arxiv.org/abs/2407.19694v2
- Date: Wed, 15 Jan 2025 13:53:29 GMT
- Title: Structural damage detection via hierarchical damage information with volumetric assessment
- Authors: Isaac Osei Agyemang, Isaac Adjei-Mensah, Daniel Acheampong, Gordon Owusu Boateng, Adu Asare Baffour, et al.
- Abstract summary: Structural health monitoring (SHM) is essential for ensuring the safety and longevity of infrastructure.
This study introduces the Guided Detection Network (Guided-DetNet), a framework designed to address these challenges.
Guided-DetNet is characterized by a Generative Attention Module (GAM), Hierarchical Elimination Algorithm (HEA), and Volumetric Contour Visual Assessment (VCVA).
- Score: 1.4470320778878742
- Abstract: Structural health monitoring (SHM) is essential for ensuring the safety and longevity of infrastructure, but complex image environments, noisy labels, and reliance on manual damage assessments often hinder its effectiveness. This study introduces the Guided Detection Network (Guided-DetNet), a framework designed to address these challenges. Guided-DetNet is characterized by a Generative Attention Module (GAM), Hierarchical Elimination Algorithm (HEA), and Volumetric Contour Visual Assessment (VCVA). GAM leverages cross-horizontal and cross-vertical patch merging and cross-foreground-background feature fusion to generate varied features that mitigate complex image environments. HEA addresses noisy labeling by using hierarchical relationships among classes to refine the instances detected in an image, eliminating unlikely class instances. VCVA assesses the severity of detected damages via volumetric representation and quantification leveraging the Dirac delta distribution. A comprehensive quantitative study and two robustness tests were conducted using the PEER Hub dataset, and a drone-based application involving a field experiment was conducted to substantiate Guided-DetNet's promising performance. In triple classification tasks, the framework achieved 96% accuracy, surpassing state-of-the-art classifiers by up to 3%. In dual detection tasks, it outperformed competitive detectors with a precision of 94% and a mean average precision (mAP) of 79% while maintaining a frame rate of 57.04 fps, suitable for real-time applications. Additionally, robustness tests demonstrated resilience under adverse conditions, with precision scores ranging from 79% to 91%. Guided-DetNet is established as a robust and efficient framework for SHM, offering advancements in automation and precision, with the potential for widespread application in drone-based infrastructure inspections.
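Of the three components in the abstract, HEA is the most directly algorithmic: it post-processes a detector's candidate instances using hierarchical relationships among classes to discard unlikely ones. The snippet below is only a minimal illustrative sketch of that idea, not the paper's implementation; the two-level hierarchy, class names, detection format, and confidence threshold are assumptions made for the example.

```python
# Illustrative sketch of hierarchical elimination over detections (not the
# paper's implementation). Each detection is (class_name, confidence, bbox);
# a fine-grained class is kept only if its parent class has confident support.
from typing import Dict, List, Tuple

# Hypothetical two-level hierarchy: parent class -> child classes.
HIERARCHY: Dict[str, List[str]] = {
    "damaged": ["crack", "spalling", "exposed_rebar"],
    "undamaged": ["intact_surface"],
}
PARENT_OF = {child: parent for parent, children in HIERARCHY.items()
             for child in children}

Detection = Tuple[str, float, Tuple[int, int, int, int]]  # (class, score, bbox)

def hierarchical_eliminate(dets: List[Detection], thr: float = 0.5) -> List[Detection]:
    """Drop child-class detections whose parent class has no confident support."""
    # A parent is supported if any detection mapping to it clears the threshold.
    supported = {PARENT_OF.get(cls, cls) for cls, score, _ in dets if score >= thr}
    return [d for d in dets if PARENT_OF.get(d[0], d[0]) in supported]

dets = [("crack", 0.91, (10, 10, 80, 40)),
        ("exposed_rebar", 0.35, (12, 14, 70, 38)),   # weak, but parent "damaged" is supported
        ("intact_surface", 0.20, (0, 0, 200, 200))]  # parent "undamaged" unsupported -> dropped
print(hierarchical_eliminate(dets))
```

In this toy version the hierarchy only gates which classes survive; the paper's HEA operates inside the detection pipeline and would use the actual class taxonomy of the PEER Hub dataset.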
Related papers
- Adaptive Signal Analysis for Automated Subsurface Defect Detection Using Impact Echo in Concrete Slabs [0.0]
This pilot study presents a novel, automated, and scalable methodology for detecting subsurface defect-prone regions in concrete slabs.
The approach integrates advanced signal processing, clustering, and visual analytics to identify subsurface anomalies.
The results demonstrate the robustness of the methodology, consistently identifying defect-prone areas with minimal false positives and few missed defects.
arXiv Detail & Related papers (2024-12-23T20:05:53Z)
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- Better Sampling, towards Better End-to-end Small Object Detection [7.7473020808686694]
Small object detection remains unsatisfactory because small objects have limited visual characteristics and often appear densely packed and mutually overlapping.
We propose methods that enhance sampling within an end-to-end framework.
Our model demonstrates a significant enhancement, achieving a 2.9% increase in average precision (AP) over the state-of-the-art (SOTA) on the VisDrone dataset.
arXiv Detail & Related papers (2024-05-17T04:37:44Z)
- Enhancing Infrared Small Target Detection Robustness with Bi-Level Adversarial Framework [61.34862133870934]
We propose a bi-level adversarial framework to promote the robustness of detection in the presence of distinct corruptions.
Our scheme improves IoU by a remarkable 21.96% across a wide array of corruptions and by a notable 4.97% on the general benchmark.
arXiv Detail & Related papers (2023-09-03T06:35:07Z)
- A Computer Vision Enabled damage detection model with improved YOLOv5 based on Transformer Prediction Head [0.0]
Current state-of-the-art deep learning (DL)-based damage detection models often lack superior feature extraction capability in complex and noisy environments.
DenseSPH-YOLOv5 is a real-time DL-based high-performance damage detection model where DenseNet blocks have been integrated with the backbone.
DenseSPH-YOLOv5 obtains a mean average precision (mAP) value of 85.25%, an F1-score of 81.18%, and a precision (P) value of 89.51%, outperforming current state-of-the-art models.
arXiv Detail & Related papers (2023-03-07T22:53:36Z)
- SAFE: Sensitivity-Aware Features for Out-of-Distribution Object Detection [10.306996649145464]
We show that residual convolutional layers with batch normalisation produce Sensitivity-Aware FEatures (SAFE).
SAFE is consistently powerful for distinguishing in-distribution from out-of-distribution detections.
We extract SAFE vectors for every detected object, and train a multilayer perceptron on the surrogate task of distinguishing adversarially perturbed from clean in-distribution examples.
arXiv Detail & Related papers (2022-08-29T23:57:55Z)
- Engineering deep learning methods on automatic detection of damage in infrastructure due to extreme events [0.38233569758620045]
This paper presents a few experimental studies for automated Structural Damage Detection (SDD) in extreme events using deep learning methods.
In the first study, a 152-layer Residual network (ResNet) is utilized to classify multiple classes in eight SDD tasks.
The results show that the accuracy of damage detection is significantly improved compared to only using a segmentation network.
arXiv Detail & Related papers (2022-05-01T19:55:56Z)
- From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z)
- The KFIoU Loss for Rotated Object Detection [115.334070064346]
In this paper, we argue that one effective alternative is to devise an approximate loss that can achieve trend-level alignment with the SkewIoU loss.
Specifically, we model the objects as Gaussian distributions and adopt a Kalman filter to inherently mimic the mechanism of SkewIoU.
The resulting new loss, called KFIoU, is easier to implement and works better than the exact SkewIoU; a toy sketch of this Gaussian-overlap idea appears after this list.
arXiv Detail & Related papers (2022-01-29T10:54:57Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision tasks (e.g., rotation and jigsaw) benefit image tasks such as classification and recognition, they fail to provide the critical supervision signals needed to learn discriminative representations for segmentation.
We propose adversarial self-supervision UDA (ASSUDA), which maximizes the agreement between clean images and their adversarial examples via a contrastive loss in the output space.
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- From Sound Representation to Model Robustness [82.21746840893658]
We investigate the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
Averaged over various experiments on three environmental sound datasets, we found that the ResNet-18 model outperforms other deep learning architectures.
arXiv Detail & Related papers (2020-07-27T17:30:49Z)
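As flagged in the KFIoU entry above, the core trick is to represent each rotated box as a 2D Gaussian and let the Kalman-filter covariance combination stand in for the intersection. The following toy sketch illustrates only that Gaussian-overlap computation; it is not the authors' reference implementation, it ignores the centre-offset term of the full loss, and the volume function and example boxes are simplifying assumptions.

```python
# Toy sketch of a KFIoU-style overlap score for rotated boxes (not the
# authors' reference implementation): boxes become 2D Gaussians, and the
# Kalman-filter covariance combination plays the role of the intersection.
import numpy as np

def box_to_gaussian(x, y, w, h, theta):
    """Rotated box (centre, size, angle in radians) -> Gaussian (mean, covariance)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    cov = R @ np.diag([w ** 2 / 4.0, h ** 2 / 4.0]) @ R.T
    return np.array([x, y]), cov

def gaussian_volume(cov):
    """Area of the box encoded by the covariance: 4 * sqrt(det(cov)) = w * h."""
    return 4.0 * np.sqrt(np.linalg.det(cov))

def kfiou_like(box1, box2):
    """IoU-style score from the Kalman-filter combination of two Gaussians.

    The centre-offset handling of the full loss is omitted; the Gaussians are
    treated as if their centres were already aligned.
    """
    _, cov1 = box_to_gaussian(*box1)
    _, cov2 = box_to_gaussian(*box2)
    cov_overlap = cov1 @ np.linalg.inv(cov1 + cov2) @ cov2  # Kalman-filter update
    v1, v2, v3 = map(gaussian_volume, (cov1, cov2, cov_overlap))
    return v3 / (v1 + v2 - v3)  # peaks near 1/3 for identical boxes

print(kfiou_like((0, 0, 4, 2, 0.0), (0, 0, 4, 2, 0.3)))
```

Note that the score saturates at 1/3 rather than 1 even for identical boxes, which is why it behaves as an IoU-like trend signal for training rather than a literal IoU.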