Automatic joint damage quantification using computer vision and deep
learning
- URL: http://arxiv.org/abs/2010.15303v1
- Date: Thu, 29 Oct 2020 01:41:20 GMT
- Title: Automatic joint damage quantification using computer vision and deep
learning
- Authors: Quang Tran and Jeffery R. Roesler
- Abstract summary: Joint raveled or spalled damage (henceforth called joint damage) can affect the safety and long-term performance of concrete pavements.
It is important to assess and quantify the joint damage over time to assist in building action plans for maintenance, predicting maintenance costs, and maximizing the concrete pavement service life.
A framework for the accurate, autonomous, and rapid quantification of joint damage with a low-cost camera is proposed using a computer vision technique with a deep learning (DL) algorithm.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Joint raveled or spalled damage (henceforth called joint damage) can affect
the safety and long-term performance of concrete pavements. It is important to
assess and quantify the joint damage over time to assist in building action
plans for maintenance, predicting maintenance costs, and maximizing the concrete
pavement service life. A framework for the accurate, autonomous, and rapid
quantification of joint damage with a low-cost camera is proposed using a
computer vision technique with a deep learning (DL) algorithm. The DL model is
trained on 263 images of sawcuts with joint damage. The trained DL model
is used for pixel-wise color-masking joint damage in a series of query 2D
images, which are used to reconstruct a 3D image using an open-source
structure-from-motion (SfM) algorithm. Another damage quantification algorithm using a color
threshold is applied to detect and compute the surface area of the damage in
the 3D reconstructed image. The effectiveness of the framework was validated
through inspecting joint damage at four transverse contraction joints in
Illinois, USA, including three acceptable joints and one unacceptable joint by
visual inspection. The results show the framework achieves 76% recall and 10%
error.
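The final quantification step described above (a color threshold applied to the color-masked reconstruction, then a surface-area computation) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the red mask color, and the per-pixel scale factor are assumptions.

```python
import numpy as np

def damage_area(rgb, mask_color=(255, 0, 0), tol=30, pixel_area_mm2=1.0):
    """Estimate damaged surface area from a color-masked image.

    rgb: (H, W, 3) uint8 image in which damage pixels were painted
         approximately `mask_color` by the segmentation step.
    tol: per-channel tolerance for the color threshold.
    pixel_area_mm2: physical area represented by one pixel (recovered
         from the scale of the 3D reconstruction).
    """
    diff = np.abs(rgb.astype(int) - np.array(mask_color))
    damage = np.all(diff <= tol, axis=-1)   # boolean damage mask
    return damage.sum() * pixel_area_mm2    # total damaged area

# toy example: a 4x4 image with 3 red (damage-masked) pixels
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[0, 0] = img[1, 2] = img[3, 3] = (255, 0, 0)
print(damage_area(img))  # -> 3.0
```

In practice the per-pixel area would come from the metric scale of the SfM reconstruction rather than a fixed constant.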
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
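The uncertainty quantity used in evidential learning can be sketched generically: non-negative per-class evidence is mapped to Dirichlet parameters, and low total evidence yields high uncertainty. This is a standard evidential deep learning formulation, not necessarily the exact loss used in the paper; the function name is hypothetical.

```python
import numpy as np

def evidential_uncertainty(evidence):
    """Vacuity uncertainty from non-negative per-class evidence.

    Evidential deep learning maps evidence e_k to Dirichlet parameters
    alpha_k = e_k + 1; total uncertainty is u = K / sum(alpha).
    """
    alpha = np.asarray(evidence, dtype=float) + 1.0
    num_classes = alpha.shape[-1]
    return num_classes / alpha.sum(axis=-1)

# no evidence for any of 3 classes -> maximal uncertainty
print(evidential_uncertainty([0.0, 0.0, 0.0]))   # -> 1.0
# strong evidence for one class -> low uncertainty
print(evidential_uncertainty([25.0, 1.0, 1.0]))  # -> 0.1
```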
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- 3D vision-based structural masonry damage detection [6.442649108177674]
We present a 3D vision-based methodology for accurate masonry damage detection.
First, images of the masonry specimens are collected to generate a 3D point cloud.
Second, 3D point cloud processing methods are developed to evaluate the masonry damage.
arXiv Detail & Related papers (2023-08-31T00:48:05Z)
- A hierarchical semantic segmentation framework for computer vision-based bridge damage detection [3.7642333932730634]
Computer vision-based damage detection using remote cameras and unmanned aerial vehicles (UAVs) enables efficient and low-cost bridge health monitoring.
This paper introduces a semantic segmentation framework that imposes the hierarchical semantic relationship between component category and damage types.
In this way, the damage detection model can focus on learning features from potentially damaged regions only and avoid the influence of irrelevant regions.
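The gating idea in this summary, suppressing damage predictions that fall outside plausible component regions, can be sketched as a simple post-hoc mask. This is an illustrative simplification of the hierarchical relationship, not the paper's actual architecture; the function name and threshold are assumptions.

```python
import numpy as np

def gate_damage_by_component(damage_probs, component_probs, thresh=0.5):
    """Suppress damage predictions outside plausible component regions.

    damage_probs, component_probs: (H, W) per-pixel probabilities from
    the damage and component segmentation heads, respectively.
    """
    component_mask = component_probs >= thresh
    return np.where(component_mask, damage_probs, 0.0)

damage = np.array([[0.9, 0.8], [0.7, 0.1]])
component = np.array([[0.9, 0.2], [0.6, 0.9]])
print(gate_damage_by_component(damage, component))
# the high damage score at (0, 1) is zeroed: it lies outside the component mask
```

In the paper's framework the hierarchy is imposed during training, so the damage head learns features only from component regions rather than being masked after the fact.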
arXiv Detail & Related papers (2022-07-18T18:42:54Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- PONet: Robust 3D Human Pose Estimation via Learning Orientations Only [116.1502793612437]
We propose a novel Pose Orientation Net (PONet) that is able to robustly estimate 3D pose by learning orientations only.
PONet estimates the 3D orientation of these limbs by taking advantage of the local image evidence to recover the 3D pose.
We evaluate our method on multiple datasets, including Human3.6M, MPII, MPI-INF-3DHP, and 3DPW.
arXiv Detail & Related papers (2021-12-21T12:48:48Z)
- An Efficient and Scalable Deep Learning Approach for Road Damage Detection [0.0]
This paper introduces a deep learning-based surveying scheme to analyze the image-based distress data in real-time.
A database consisting of a diverse population of crack distress types such as longitudinal, transverse, and alligator cracks is used.
The proposed models achieve F1-scores ranging from 52% to 56% and inference speeds ranging from 10 to 178 images per second.
arXiv Detail & Related papers (2020-11-18T23:05:41Z)
- Kinematic-Structure-Preserved Representation for Unsupervised 3D Human Pose Estimation [58.72192168935338]
Generalizability of human pose estimation models developed using supervision on large-scale in-studio datasets remains questionable.
We propose a novel kinematic-structure-preserved unsupervised 3D pose estimation framework, which is not restrained by any paired or unpaired weak supervisions.
Our proposed model employs three consecutive differentiable transformations, named forward-kinematics, camera-projection, and spatial-map transformations.
arXiv Detail & Related papers (2020-06-24T23:56:33Z)
- Synthetic Image Augmentation for Damage Region Segmentation using Conditional GAN with Structure Edge [0.0]
We propose a synthetic augmentation procedure to generate damaged images using the image-to-image translation mapping.
We apply popular per-pixel segmentation algorithms such as the FCN-8s, SegNet, and DeepLabv3+Xception-v2.
We demonstrate that re-training on a dataset extended with the synthetic augmentation procedure yields higher accuracy on test images, as measured by mean IoU, damage region-of-interest IoU, precision, recall, and BF score.
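Mean IoU, the primary segmentation metric mentioned in this summary, can be computed as follows for integer label maps. This is the standard definition, sketched for illustration; it is not the paper's evaluation code.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes for integer label maps."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent in both maps; skip rather than count as 1
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
# class 0: IoU = 1/2; class 1: IoU = 2/3; mean = 7/12
print(mean_iou(pred, target, 2))
```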
arXiv Detail & Related papers (2020-05-07T06:04:02Z)
- Per-pixel Classification Rebar Exposures in Bridge Eye-inspection [0.0]
We propose three transfer-learning-based damage detection methods that enable semantic segmentation in low-resolution images.
In this paper, we present results on 208 rebar exposure images from 106 real-world bridges.
arXiv Detail & Related papers (2020-04-22T17:28:42Z)
- RescueNet: Joint Building Segmentation and Damage Assessment from Satellite Imagery [83.49145695899388]
RescueNet is a unified model that can simultaneously segment buildings and assess the damage levels to individual buildings and can be trained end-to-end.
RescueNet is tested on the large scale and diverse xBD dataset and achieves significantly better building segmentation and damage classification performance than previous methods.
arXiv Detail & Related papers (2020-04-15T19:52:09Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle to curb distance in real time with mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.