Underground Mapping and Localization Based on Ground-Penetrating Radar
- URL: http://arxiv.org/abs/2409.16446v1
- Date: Tue, 24 Sep 2024 20:26:16 GMT
- Title: Underground Mapping and Localization Based on Ground-Penetrating Radar
- Authors: Jinchang Zhang, Guoyu Lu
- Abstract summary: This paper introduces a parabolic signal detection network based on deep convolutional neural networks, utilizing B-scan images from GPR sensors.
The detected keypoints can aid in accurately fitting parabolic curves used to interpret the original GPR B-scan images as cross-sections of the object model.
For unknown locations, GPR A-scan data can be matched against the corresponding A-scan data in the constructed map, pinpointing the position and verifying the accuracy of the map constructed by the model.
- Score: 11.737279515161505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D object reconstruction based on deep neural networks has gained increasing attention in recent years. However, 3D reconstruction of underground objects to generate point cloud maps remains a challenge. Ground Penetrating Radar (GPR) is one of the most powerful and extensively used tools for detecting and locating underground objects such as plant root systems and pipelines, owing to its cost-effectiveness and continuously evolving technology. This paper introduces a parabolic signal detection network based on deep convolutional neural networks, utilizing B-scan images from GPR sensors. The detected keypoints can aid in accurately fitting parabolic curves used to interpret the original GPR B-scan images as cross-sections of the object model. Additionally, a multi-task point cloud network is designed to perform both point cloud segmentation and completion simultaneously, filling in sparse point cloud maps. For unknown locations, GPR A-scan data can be matched against the corresponding A-scan data in the constructed map, pinpointing the position and verifying the accuracy of the map constructed by the model. Experimental results demonstrate the effectiveness of our method.
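As a rough illustration of the curve-fitting step, the sketch below assumes the CNN detector has already returned keypoints as (trace index, travel-time sample) pairs along one reflection signature, and fits a parabola t = a*x^2 + b*x + c to them by least squares; the apex then gives the buried object's horizontal position and shallowest two-way travel time. This is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def fit_parabola(keypoints):
    """Fit t = a*x^2 + b*x + c to (x, t) keypoints from one GPR signature."""
    x, t = np.asarray(keypoints, dtype=float).T
    a, b, c = np.polyfit(x, t, deg=2)      # least-squares quadratic fit
    x_apex = -b / (2.0 * a)                # horizontal position of the buried object
    t_apex = c - b**2 / (4.0 * a)          # shallowest two-way travel time
    return (a, b, c), (x_apex, t_apex)

# Toy usage with a synthetic, noisy signature (apex at trace 50, 12 samples).
xs = np.arange(40.0, 61.0)
ts = 0.02 * (xs - 50.0) ** 2 + 12.0 + np.random.normal(0.0, 0.1, xs.size)
coeffs, (x0, t0) = fit_parabola(np.stack([xs, ts], axis=1))
print(f"apex at trace {x0:.1f}, travel time {t0:.2f} samples")
```

The localization step can likewise be read as a nearest-neighbor search over A-scans stored with their map positions. The sketch below uses zero-mean correlation as the similarity score; the paper's actual matching criterion and map layout may differ, so the names and scoring here are assumptions.

```python
import numpy as np

def normalized_similarity(a, b):
    """Zero-mean correlation between two equal-length A-scan traces, in [-1, 1]."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.dot(a, b) / a.size)

def localize(query_ascan, map_ascans, map_positions):
    """map_ascans: (M, T) stored traces; map_positions: (M, 2) map coordinates."""
    scores = [normalized_similarity(query_ascan, trace) for trace in map_ascans]
    best = int(np.argmax(scores))
    return map_positions[best], scores[best]
```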
Related papers
- PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, boosting 3D point cloud registration using generative point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z)
- HVDistill: Transferring Knowledge from Images to Point Clouds via Unsupervised Hybrid-View Distillation [106.09886920774002]
We present a hybrid-view-based knowledge distillation framework, termed HVDistill, to guide the feature learning of a point cloud neural network.
Our method achieves consistent improvements over the baseline trained from scratch and significantly outperforms the existing schemes.
arXiv Detail & Related papers (2024-03-18T14:18:08Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- DeepPoint: A Deep Learning Model for 3D Reconstruction in Point Clouds via mmWave Radar [10.119506666546663]
We introduce in this paper DeepPoint, a deep learning model that generates 3D objects in point cloud format.
It takes as input the 2D depth images of an object generated by 3DRIMR's Stage 1, and outputs smooth and dense 3D point clouds of the object.
Our experiments have demonstrated that this model significantly outperforms the original 3DRIMR and other standard techniques in reconstructing 3D objects.
arXiv Detail & Related papers (2021-09-19T18:28:20Z)
- 3DRIMR: 3D Reconstruction and Imaging via mmWave Radar based on Deep Learning [9.26903816093995]
mmWave radar has been shown to be an effective sensing technique in low-visibility, smoky, dusty, and dense-fog environments.
We propose 3D Reconstruction and Imaging via mmWave Radar (3DRIMR), a deep learning based architecture that reconstructs 3D shape of an object in dense detailed point cloud format.
Our experiments have demonstrated 3DRIMR's effectiveness in reconstructing 3D objects, and its performance improvement over standard techniques.
arXiv Detail & Related papers (2021-08-05T21:24:57Z)
- Robotic Inspection and 3D GPR-based Reconstruction for Underground Utilities [11.601407791322327]
Ground Penetrating Radar (GPR) is an effective non-destructive evaluation (NDE) device for inspecting and surveying subsurface objects.
The current practice for GPR data collection requires a human inspector to move a GPR cart along pre-marked grid lines.
This paper presents a novel robotic system to collect and interpret GPR data, localize the underground utilities, and reconstruct and visualize a dense point cloud model of the underground objects.
arXiv Detail & Related papers (2021-06-03T14:58:49Z)
- VR3Dense: Voxel Representation Learning for 3D Object Detection and Monocular Dense Depth Reconstruction [0.951828574518325]
We introduce a method for jointly training 3D object detection and monocular dense depth reconstruction neural networks.
It takes as inputs a LiDAR point cloud and a single RGB image during inference, and produces object pose predictions as well as a densely reconstructed depth map.
While our object detection is trained in a supervised manner, the depth prediction network is trained with both self-supervised and supervised loss functions.
arXiv Detail & Related papers (2021-04-13T04:25:54Z)
- Sparse Auxiliary Networks for Unified Monocular Depth Prediction and Completion [56.85837052421469]
Estimating scene geometry from data obtained with cost-effective sensors is key for robots and self-driving cars.
In this paper, we study the problem of predicting dense depth from a single RGB image with optional sparse measurements from low-cost active depth sensors.
We introduce Sparse Networks (SANs), a new module enabling monodepth networks to perform both the tasks of depth prediction and completion.
arXiv Detail & Related papers (2021-03-30T21:22:26Z)
- GPR-based Model Reconstruction System for Underground Utilities Using GPRNet [12.334006660346935]
Ground Penetrating Radar (GPR) is one of the most important non-destructive evaluation (NDE) instruments to detect and locate underground objects.
Previous research focuses on GPR image-based feature detection only; none can process sparse GPR measurements to reconstruct a fine, detailed 3D model of underground objects for better visualization.
This paper presents a novel robotic system to collect GPR data, localize the underground utilities, and reconstruct the underground objects' dense point cloud model.
arXiv Detail & Related papers (2020-11-05T03:26:01Z)
- Cross-Modality 3D Object Detection [63.29935886648709]
We present a novel two-stage multi-modal fusion network for 3D object detection.
The whole architecture facilitates two-stage fusion.
Our experiments on the KITTI dataset show that the proposed multi-stage fusion helps the network to learn better representations.
arXiv Detail & Related papers (2020-08-16T11:01:20Z)
- Expandable YOLO: 3D Object Detection from RGB-D Images [64.14512458954344]
This paper aims to construct a lightweight object detector that takes a depth image and a color image from a stereo camera as input.
By extending the YOLOv3 network architecture to 3D in its middle layers, the detector can also output predictions along the depth direction.
Intersection over Union (IoU) in 3D space is introduced to confirm the accuracy of region extraction results, as sketched below.
arXiv Detail & Related papers (2020-06-26T07:32:30Z)
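The 3D IoU mentioned in the entry above can be written down for axis-aligned boxes as in the minimal sketch below; Expandable YOLO's exact box parameterization (and any rotation handling) is not reproduced here.

```python
import numpy as np

def iou_3d(box_a, box_b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    a, b = np.asarray(box_a, dtype=float), np.asarray(box_b, dtype=float)
    lo = np.maximum(a[:3], b[:3])                   # lower corner of the intersection
    hi = np.minimum(a[3:], b[3:])                   # upper corner of the intersection
    inter = np.prod(np.clip(hi - lo, 0.0, None))    # zero if the boxes do not overlap
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter)
```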