Per-pixel Classification Rebar Exposures in Bridge Eye-inspection
- URL: http://arxiv.org/abs/2004.12805v1
- Date: Wed, 22 Apr 2020 17:28:42 GMT
- Title: Per-pixel Classification Rebar Exposures in Bridge Eye-inspection
- Authors: Takato Yasuno, Nakajima Michihiro, and Noda Kazuhiro
- Abstract summary: We propose three transfer-learning-based damage detection methods that enable semantic segmentation of low-resolution images.
In this paper, we show results of applying these methods to 208 rebar-exposure images from 106 real-world bridges.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient inspection and accurate diagnosis are required for civil
infrastructures that are 50 years or more past completion. Especially in
municipalities, the shortage of technical staff and budget constraints on
repair expenses have become a critical problem. If damage can be detected
automatically, per pixel, from inspection photo records, in addition to the
5-step judgment and countermeasure classification of eye-inspection, then
countermeasure information can be provided more flexibly: whether repair is
needed and how large the exposed damage region is. Damage in an inspection
photo is often sparse unless the photo is zoomed in on the damage; the region
where the detection target is photographed typically covers at most 1% of the
image. Generally speaking, rebar exposure occurs frequently, and there are many
opportunities to judge repair measures. In this paper, we propose three
transfer-learning damage detection methods that enable semantic segmentation of
low-resolution images using damage photos from human eye-inspections. We also
train a deep convolutional network from scratch, with preprocessing that
generates random crops with rotations. In fact, we show the results of applying
these methods to 208 rebar-exposure images from 106 real-world bridges.
Finally, we discuss future tasks in damage detection modeling.
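The preprocessing step the abstract describes, generating random crops with rotations from sparse inspection photos, can be sketched as follows. This is a minimal illustration, not the authors' published pipeline: the crop size, number of crops, and restriction to 90-degree rotations are assumptions made to keep the sketch dependency-light.

```python
# Hedged sketch of "random crops with rotations" preprocessing for
# inspection photos; parameters are illustrative, not the paper's settings.
import numpy as np

def random_rotated_crops(image, crop_size=64, n_crops=8, rng=None):
    """Generate randomly placed, randomly rotated square crops from an
    image array of shape (H, W, C).

    Rotations are restricted to multiples of 90 degrees here so the sketch
    needs only NumPy; arbitrary-angle rotation would require an
    interpolation library.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    crops = []
    for _ in range(n_crops):
        # Random top-left corner for a crop_size x crop_size window.
        top = rng.integers(0, h - crop_size + 1)
        left = rng.integers(0, w - crop_size + 1)
        crop = image[top:top + crop_size, left:left + crop_size]
        # Rotate by a random multiple of 90 degrees.
        k = rng.integers(0, 4)
        crops.append(np.rot90(crop, k))
    return crops

# Example: 8 augmented 64x64 crops from one 480x640 inspection photo.
photo = np.zeros((480, 640, 3), dtype=np.uint8)
crops = random_rotated_crops(photo)
```

Augmentation of this kind matters here because the damage region is sparse (at most about 1% of the image), so cropping around many sub-regions increases the effective number of training samples.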
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z) - Visual Context-Aware Person Fall Detection [52.49277799455569]
We present a segmentation pipeline to semi-automatically separate individuals and objects in images.
Background objects such as beds, chairs, or wheelchairs can challenge fall detection systems, leading to false positive alarms.
We demonstrate that object-specific contextual transformations during training effectively mitigate this challenge.
arXiv Detail & Related papers (2024-04-11T19:06:36Z) - Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images have evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still outperforms transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z) - MMNet: Multi-Collaboration and Multi-Supervision Network for Sequential
Deepfake Detection [81.59191603867586]
Sequential deepfake detection aims to identify forged facial regions with the correct sequence for recovery.
The recovery of forged images requires knowledge of the manipulation model to implement inverse transformations.
We propose Multi-Collaboration and Multi-Supervision Network (MMNet) that handles various spatial scales and sequential permutations in forged face images.
arXiv Detail & Related papers (2023-07-06T02:32:08Z) - A hierarchical semantic segmentation framework for computer vision-based
bridge damage detection [3.7642333932730634]
Computer vision-based damage detection using remote cameras and unmanned aerial vehicles (UAVs) enables efficient and low-cost bridge health monitoring.
This paper introduces a semantic segmentation framework that imposes the hierarchical semantic relationship between component category and damage types.
In this way, the damage detection model can focus on learning features from possibly damaged regions only and avoid the effects of other, irrelevant regions.
arXiv Detail & Related papers (2022-07-18T18:42:54Z) - An Efficient and Scalable Deep Learning Approach for Road Damage
Detection [0.0]
This paper introduces a deep learning-based surveying scheme to analyze the image-based distress data in real-time.
A database consisting of a diverse population of crack distress types such as longitudinal, transverse, and alligator cracks is used.
The proposed models achieved F1-scores ranging from 52% to 56%, with average inference speeds from 10 to 178 images per second.
arXiv Detail & Related papers (2020-11-18T23:05:41Z) - Automatic joint damage quantification using computer vision and deep
learning [0.0]
Joint raveled or spalled damage (henceforth called joint damage) can affect the safety and long-term performance of concrete pavements.
It is important to assess and quantify the joint damage over time to assist in building action plans for maintenance, predicting maintenance costs, and maximizing the concrete pavement service life.
A framework for the accurate, autonomous, and rapid quantification of joint damage with a low-cost camera is proposed using a computer vision technique with a deep learning (DL) algorithm.
arXiv Detail & Related papers (2020-10-29T01:41:20Z) - Towards Image-based Automatic Meter Reading in Unconstrained Scenarios:
A Robust and Efficient Approach [60.63996472100845]
We present an end-to-end approach for Automatic Meter Reading (AMR) focusing on unconstrained scenarios.
Our main contribution is the insertion of a new stage in the AMR pipeline, called corner detection and counter classification.
We show that our AMR system achieves impressive recognition rates (i.e., > 99%) when rejecting readings made with lower confidence values.
arXiv Detail & Related papers (2020-09-21T21:21:23Z) - Synthetic Image Augmentation for Damage Region Segmentation using
Conditional GAN with Structure Edge [0.0]
We propose a synthetic augmentation procedure to generate damaged images using image-to-image translation mapping.
We apply popular per-pixel segmentation algorithms such as FCN-8s, SegNet, and DeepLabv3+ with the Xception-v2 backbone.
We demonstrate that re-training on a dataset augmented with this synthetic procedure yields higher accuracy when predicting test images, measured by mean IoU, damage region-of-interest IoU, precision, recall, and BF score.
arXiv Detail & Related papers (2020-05-07T06:04:02Z) - Occluded Prohibited Items Detection: an X-ray Security Inspection
Benchmark and De-occlusion Attention Module [50.75589128518707]
We contribute the first high-quality object detection dataset for security inspection, named OPIXray.
OPIXray focuses on the widely occurring prohibited item "cutter", annotated manually by professional inspectors from an international airport.
We propose the De-occlusion Attention Module (DOAM), a plug-and-play module that can be easily inserted into most popular detectors to improve them.
arXiv Detail & Related papers (2020-04-18T16:10:55Z) - Comparison of object detection methods for crop damage assessment using
deep learning [0.0]
The goal of this study was a proof-of-concept to detect damaged crop areas from aerial imagery using computer vision and deep learning techniques.
An unmanned aerial system (UAS) equipped with an RGB camera was used for image acquisition.
Three popular object detectors (Faster R-CNN, YOLOv2, and RetinaNet) were assessed for their ability to detect damaged regions in a field.
YOLOv2 and RetinaNet were able to detect crop damage across multiple late-season growth stages. Faster R-CNN was not as successful as the other two detectors.
arXiv Detail & Related papers (2019-12-31T06:54:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.