Automatic Road Subsurface Distress Recognition from Ground Penetrating Radar Images using Deep Learning-based Cross-verification
- URL: http://arxiv.org/abs/2507.11081v1
- Date: Tue, 15 Jul 2025 08:23:21 GMT
- Title: Automatic Road Subsurface Distress Recognition from Ground Penetrating Radar Images using Deep Learning-based Cross-verification
- Authors: Chang Peng, Bao Yang, Meiqi Li, Ge Zhang, Hui Sun, Zhenyu Jiang
- Abstract summary: We propose a novel cross-verification strategy with outstanding accuracy in RSD recognition. The approach, integrated into an online RSD detection system, can reduce the labor of inspection by around 90%.
- Score: 10.672992684830392
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ground penetrating radar (GPR) has become a rapid and non-destructive solution for road subsurface distress (RSD) detection. However, RSD recognition from GPR images is labor-intensive and relies heavily on inspectors' expertise. Deep learning offers the possibility of automatic RSD recognition, but its current performance is limited by two factors: the scarcity of high-quality datasets for network training and the insufficient capability of networks to distinguish RSD. In this study, a rigorously validated 3D GPR dataset containing 2134 samples of diverse types was constructed through field scanning. Based on the finding that a YOLO model trained on each of the three scan directions of GPR images exhibits varying sensitivity to specific types of RSD, we propose a novel cross-verification strategy with outstanding accuracy in RSD recognition, achieving recall over 98.6% in field tests. The approach, integrated into an online RSD detection system, can reduce the labor of inspection by around 90%.
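The cross-verification idea in the abstract (one YOLO detector per GPR scan direction, with their outputs combined) can be sketched as a simple voting rule. The distress labels, view contents, and `min_votes` threshold below are illustrative assumptions, not the paper's published decision rule:

```python
# Hypothetical sketch of cross-verification: each of the three GPR scan
# views is processed by its own detector, and a candidate distress type
# is accepted when enough views agree.
from collections import Counter

def cross_verify(detections_per_view, min_votes=1):
    """detections_per_view: list of sets of candidate RSD labels,
    one set per scan view. Returns labels confirmed by >= min_votes views."""
    votes = Counter()
    for view in detections_per_view:
        for label in view:
            votes[label] += 1
    return {label for label, n in votes.items() if n >= min_votes}

# Example: three views with differing sensitivity to distress types
views = [{"crack", "void"}, {"void"}, {"void", "settlement"}]
print(cross_verify(views, min_votes=2))  # {'void'}
```

With `min_votes=1` the rule reduces to a union of the views' detections, which favors recall; the paper's reported recall over 98.6% suggests a recall-oriented combination, though the exact rule is not given in this listing.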
Related papers
- RAT: Boosting Misclassification Detection Ability without Extra Data [17.800393583230044]
In this work, we investigate the detection of misclassified inputs for image classification models from the lens of adversarial perturbation. We propose to use robust radius as a confidence metric and design two efficient estimation algorithms, RR-BS and RR-Fast, for misclassification detection. In experiments, our method achieves up to a 29.3% reduction in AURC and a 21.62% reduction in FPR@95TPR, compared with previous methods.
arXiv Detail & Related papers (2025-03-18T23:18:55Z) - FARE: A Deep Learning-Based Framework for Radar-based Face Recognition and Out-of-distribution Detection [0.0]
We propose a novel pipeline for face recognition and out-of-distribution detection using short-range FMCW radar. The proposed system utilizes Range-Doppler and micro Range-Doppler images. Our method achieves an ID classification accuracy of 99.30% and an OOD detection AUROC of 96.91%.
arXiv Detail & Related papers (2025-01-14T21:08:08Z) - Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing pose serious risks for generative models. In this paper, we investigate how detection performance varies across model backbones, types, and datasets. We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - Deep Homography Estimation for Visual Place Recognition [49.235432979736395]
We propose a transformer-based deep homography estimation (DHE) network.
It takes the dense feature map extracted by a backbone network as input and fits homography for fast and learnable geometric verification.
Experiments on benchmark datasets show that our method can outperform several state-of-the-art methods.
arXiv Detail & Related papers (2024-02-25T13:22:17Z) - Long-Tailed 3D Detection via Multi-Modal Fusion [47.03801888003686]
We study the problem of Long-Tailed 3D Detection (LT3D), which evaluates all annotated classes, including those in-the-tail.
We point out that rare-class accuracy is particularly improved via multi-modal late fusion (MMLF) of independently trained uni-modal LiDAR and RGB detectors.
Our proposed MMLF approach significantly improves LT3D performance over prior work, particularly improving rare class performance from 12.8 to 20.0 mAP!
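The multi-modal late fusion (MMLF) described above merges the outputs of independently trained LiDAR and RGB detectors at the detection level. A minimal sketch follows, assuming IoU-based matching and score averaging; these are illustrative choices, not the paper's exact procedure:

```python
# Minimal illustration of detection-level late fusion between two
# independently trained detectors. Unmatched detections are kept,
# which helps preserve rare-class recall.

def iou(a, b):
    # a, b: (x1, y1, x2, y2) axis-aligned boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def late_fuse(lidar_dets, rgb_dets, iou_thr=0.5):
    """Each detection is (box, label, score). Same-label pairs above the
    IoU threshold get averaged scores; everything else passes through."""
    fused, used = [], set()
    for box, lbl, s in lidar_dets:
        best, best_iou = None, iou_thr
        for j, (b2, l2, s2) in enumerate(rgb_dets):
            if j in used or l2 != lbl:
                continue
            o = iou(box, b2)
            if o >= best_iou:
                best, best_iou = j, o
        if best is not None:
            used.add(best)
            fused.append((box, lbl, (s + rgb_dets[best][2]) / 2))
        else:
            fused.append((box, lbl, s))
    fused += [d for j, d in enumerate(rgb_dets) if j not in used]
    return fused
```

Because neither detector can veto the other, a rare-class object found by only one modality still survives fusion, which is consistent with the paper's emphasis on improving rare-class accuracy.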
arXiv Detail & Related papers (2023-12-18T07:14:25Z) - Multi-View Fusion and Distillation for Subgrade Distresses Detection
based on 3D-GPR [19.49863426864145]
We introduce a novel methodology for the subgrade distress detection task by leveraging the multi-view information from 3D-GPR data.
We develop a novel Multi-View Fusion and Distillation framework, GPR-MVFD, specifically designed to optimally utilize the multi-view GPR dataset.
arXiv Detail & Related papers (2023-08-09T08:06:28Z) - Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z) - Radar Guided Dynamic Visual Attention for Resource-Efficient RGB Object
Detection [10.983063391496543]
We propose a novel radar-guided spatial attention for RGB images to improve the perception quality of autonomous vehicles.
Our method improves the perception of small and long-range objects, which are often missed by object detectors operating on RGB images alone.
arXiv Detail & Related papers (2022-06-03T18:29:55Z) - ReDFeat: Recoupling Detection and Description for Multimodal Feature
Learning [51.07496081296863]
We recouple independent constraints of detection and description of multimodal feature learning with a mutual weighting strategy.
We propose a detector that possesses a large receptive field and is equipped with learnable non-maximum suppression layers.
We build a benchmark that contains cross visible, infrared, near-infrared and synthetic aperture radar image pairs for evaluating the performance of features in feature matching and image registration tasks.
arXiv Detail & Related papers (2022-05-16T04:24:22Z) - Outlier-based Autism Detection using Longitudinal Structural MRI [6.311381904410801]
This paper proposes structural Magnetic Resonance Imaging (sMRI)-based Autism Spectrum Disorder diagnosis via an outlier detection approach.
Generative Adversarial Network (GAN) is trained exclusively with sMRI scans of healthy subjects.
Experiments reveal that our ASD detection framework performs comparably with the state-of-the-art with far fewer training data.
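The outlier-detection framing above (a generative model trained only on healthy scans, with reconstruction error as the anomaly signal) can be sketched as follows. The scoring function and threshold are hypothetical, and `reconstruct` stands in for the trained GAN's reconstruction step:

```python
# Illustrative outlier rule: a model trained only on healthy scans should
# reconstruct healthy data well, so a high reconstruction error suggests
# an out-of-distribution (potential ASD) scan.
import numpy as np

def anomaly_score(scan, reconstruct):
    """Mean squared reconstruction error of a scan."""
    recon = reconstruct(scan)
    return float(np.mean((scan - recon) ** 2))

def is_outlier(scan, reconstruct, threshold):
    # threshold would be calibrated on held-out healthy scans
    return anomaly_score(scan, reconstruct) > threshold

# Toy example: the "reconstructor" always returns the healthy template
template = np.zeros((4, 4))
reconstruct = lambda scan: template
healthy = np.zeros((4, 4))
anomalous = np.ones((4, 4))
print(is_outlier(healthy, reconstruct, 0.5))    # False
print(is_outlier(anomalous, reconstruct, 0.5))  # True
```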
arXiv Detail & Related papers (2022-02-21T04:37:25Z) - Oriented R-CNN for Object Detection [61.78746189807462]
This work proposes an effective and simple oriented object detection framework, termed Oriented R-CNN.
In the first stage, we propose an oriented Region Proposal Network (oriented RPN) that directly generates high-quality oriented proposals in a nearly cost-free manner.
The second stage is oriented R-CNN head for refining oriented Regions of Interest (oriented RoIs) and recognizing them.
arXiv Detail & Related papers (2021-08-12T12:47:43Z) - Collaborative Training between Region Proposal Localization and
Classification for Domain Adaptive Object Detection [121.28769542994664]
Domain adaptation for object detection tries to adapt the detector from labeled datasets to unlabeled ones for better performance.
In this paper, we are the first to reveal that the region proposal network (RPN) and region proposal classifier (RPC) demonstrate significantly different transferability when facing a large domain gap.
arXiv Detail & Related papers (2020-09-17T07:39:52Z) - UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional
Variational Autoencoders [81.5490760424213]
We propose the first framework (UCNet) to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose probabilistic RGB-D saliency detection network.
arXiv Detail & Related papers (2020-04-13T04:12:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.