3D vision-based structural masonry damage detection
- URL: http://arxiv.org/abs/2308.16380v1
- Date: Thu, 31 Aug 2023 00:48:05 GMT
- Authors: Elmira Faraji Zonouz, Xiao Pan, Yu-Cheng Hsu, Tony Yang
- Abstract summary: We present a 3D vision-based methodology for accurate masonry damage detection.
First, images of the masonry specimens are collected to generate a 3D point cloud.
Second, 3D point cloud processing methods are developed to evaluate the masonry damage.
- Score: 6.442649108177674
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The detection of masonry damage is essential for preventing potentially
disastrous outcomes. Manual inspection, however, can be time-consuming and
hazardous to human inspectors. Automating the inspection process with novel
computer vision and machine learning algorithms offers a more efficient and
safer way to prevent further deterioration of masonry structures. Most
existing 2D vision-based methods are limited to qualitative damage
classification, 2D localization, and in-plane quantification. In this study, we
present a 3D vision-based methodology for accurate masonry damage detection,
which offers a more robust solution with a greater field of view, depth of
vision, and the ability to detect failures in complex environments. First,
images of the masonry specimens are collected to generate a 3D point cloud.
Second, 3D point cloud processing methods are developed to evaluate the
masonry damage. We demonstrate the effectiveness of our approach through
experiments on structural masonry components, which showed that the proposed
system can effectively classify damage states and localize and quantify
critical damage features. The results indicate that the proposed method can
improve the level of autonomy during the inspection of masonry structures.
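The paper does not publish its point cloud processing code, so the following is a hypothetical NumPy sketch of one common step such a pipeline might use: fitting a reference plane to a wall-like point cloud and flagging points whose out-of-plane deviation exceeds a threshold (e.g., spalling or displaced units). The function names, synthetic data, and threshold are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fit_plane(points):
    # Least-squares plane fit via SVD: returns the centroid and unit normal.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]  # last right-singular vector = plane normal

def flag_damage(points, threshold):
    # Flag points whose distance from the fitted reference plane exceeds
    # the threshold; these are candidate damage locations.
    centroid, normal = fit_plane(points)
    deviations = np.abs((points - centroid) @ normal)
    return deviations > threshold, deviations

# Synthetic "masonry wall" cloud: a flat 50x50 grid with surface roughness
# and a 10x10 recessed (spalled) patch.
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
zz = rng.normal(0.0, 0.001, xx.shape)  # roughness, std 1 mm (units of m)
zz[20:30, 20:30] -= 0.02               # damaged patch recessed by 20 mm
cloud = np.column_stack([xx.ravel(), yy.ravel(), zz.ravel()])

mask, dev = flag_damage(cloud, threshold=0.005)
damaged_fraction = mask.mean()  # ~0.04 (100 of 2500 points flagged)
```

The damaged fraction and the flagged points' coordinates give a simple quantification and localization of the defect; a real pipeline would add registration, clustering of flagged points, and per-region metrics.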
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates in identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- Towards Unified 3D Object Detection via Algorithm and Data Unification [70.27631528933482]
We build the first unified multi-modal 3D object detection benchmark, MM-Omni3D, and extend the aforementioned monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- Monocular 2D Camera-based Proximity Monitoring for Human-Machine Collision Warning on Construction Sites [1.7223564681760168]
Struck-by-machine accidents are one of the leading causes of casualties on construction sites.
Monitoring workers' proximity to machines to avoid human-machine collisions has become a major concern in construction safety management.
This study proposes a novel framework for proximity monitoring that uses only an ordinary 2D camera to provide real-time human-machine collision warnings.
arXiv Detail & Related papers (2023-05-29T07:47:27Z)
- MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in Adverse Scenes [49.21187418886508]
This paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP.
We first introduce an adaptive learning strategy to aid the model in handling uncontrollable weather conditions, significantly resisting degradation caused by various degrading factors.
Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene and object depth.
arXiv Detail & Related papers (2023-05-18T13:42:02Z)
- Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition [42.60954035488262]
The goal of this work is to develop a more reliable technique that can carry out an end-to-end evaluation of adversarial robustness for commercial systems.
We design adversarial textured 3D meshes (AT3D) with an elaborate topology on a human face, which can be 3D-printed and pasted on the attacker's face to evade the defenses.
To deviate from the mesh-based space, we propose to perturb the low-dimensional coefficient space based on a 3D Morphable Model.
arXiv Detail & Related papers (2023-03-28T08:42:54Z)
- On the Adversarial Robustness of Camera-based 3D Object Detection [21.091078268929667]
We investigate the robustness of leading camera-based 3D object detection approaches under various adversarial conditions.
We find that bird's-eye-view-based representations exhibit stronger robustness against localization attacks, that depth-estimation-free approaches have the potential to show stronger robustness, and that incorporating multi-frame benign inputs can effectively mitigate adversarial attacks.
arXiv Detail & Related papers (2023-01-25T18:59:15Z)
- Multi-view deep learning for reliable post-disaster damage classification [0.0]
This study aims to enable more reliable automated post-disaster building damage classification using artificial intelligence (AI) and multi-view imagery.
The proposed model is trained and validated on a reconnaissance visual dataset containing expert-labeled, geotagged images of buildings inspected following Hurricane Harvey.
arXiv Detail & Related papers (2022-08-06T01:04:13Z)
- Geometry Uncertainty Projection Network for Monocular 3D Object Detection [138.24798140338095]
We propose a Geometry Uncertainty Projection Network (GUP Net) to tackle the error amplification problem at both the inference and training stages.
Specifically, a GUP module is proposed to obtain the geometry-guided uncertainty of the inferred depth.
At the training stage, we propose a Hierarchical Task Learning strategy to reduce the instability caused by error amplification.
arXiv Detail & Related papers (2021-07-29T06:59:07Z)
- Delving into Localization Errors for Monocular 3D Object Detection [85.77319416168362]
Estimating 3D bounding boxes from monocular images is an essential component of autonomous driving.
In this work, we quantify the impact introduced by each sub-task and find that localization error is the vital factor restricting monocular 3D detection.
arXiv Detail & Related papers (2021-03-30T10:38:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.