A vision-based autonomous UAV inspection framework for unknown tunnel
construction sites with dynamic obstacles
- URL: http://arxiv.org/abs/2301.08422v3
- Date: Fri, 12 Jan 2024 23:53:40 GMT
- Title: A vision-based autonomous UAV inspection framework for unknown tunnel
construction sites with dynamic obstacles
- Authors: Zhefan Xu, Baihan Chen, Xiaoyang Zhan, Yumeng Xiu, Christopher Suzuki,
Kenji Shimada
- Abstract summary: This paper proposes a vision-based UAV inspection framework for dynamic tunnel environments.
Our framework contains a novel dynamic map module that can simultaneously track dynamic obstacles and represent static obstacles.
Our flight experiments in a real tunnel prove that our method can autonomously inspect the tunnel excavation front surface.
- Score: 7.340017786387768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tunnel construction using the drill-and-blast method requires the 3D
measurement of the excavation front to evaluate underbreak locations.
Considering the safety, cost, and efficiency of the inspection and measurement
task, deploying lightweight autonomous robots, such as unmanned aerial vehicles
(UAVs), has become increasingly necessary and popular. Most previous works use a
prior map for inspection viewpoint determination and do not consider dynamic
obstacles. To maximally increase the level of autonomy, this paper proposes a
vision-based UAV inspection framework for dynamic tunnel environments without
using a prior map. Our approach utilizes a hierarchical planning scheme,
decomposing the inspection problem into different levels. The high-level
decision maker first determines the task for the robot and generates the target
point. Then, the mid-level path planner finds the waypoint path and optimizes
the collision-free static trajectory. Finally, the static trajectory will be
fed into the low-level local planner to avoid dynamic obstacles and navigate to
the target point. Besides, our framework contains a novel dynamic map module
that can simultaneously track dynamic obstacles and represent static obstacles
based on an RGB-D camera. After inspection, the Structure-from-Motion (SfM)
pipeline is applied to generate the 3D shape of the target. To the best of our
knowledge, this is the first time autonomous inspection has been realized in
unknown and dynamic tunnel environments. Our flight experiments in a real
tunnel prove that our method can autonomously inspect the tunnel excavation
front surface. Our software is available on GitHub as an open-source ROS
package.
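The hierarchical planning scheme described in the abstract (a high-level decision maker that picks the task and target point, a mid-level planner that produces a waypoint path, and a low-level local planner that avoids dynamic obstacles) can be illustrated with a minimal sketch. All function names, coordinates, and avoidance logic below are hypothetical illustrations for the three-level decomposition; they are not taken from the authors' open-source ROS package.

```python
import math

# Illustrative sketch of a three-level planning hierarchy; names and logic
# are hypothetical, not the paper's actual implementation.

def high_level_decision(inspected_front: bool) -> tuple:
    """High level: choose the current task and emit its target point.
    Here: fly to the excavation front if not yet inspected, else return home."""
    return (50.0, 0.0, 2.0) if not inspected_front else (0.0, 0.0, 1.0)

def mid_level_path(start: tuple, goal: tuple, n_waypoints: int = 5) -> list:
    """Mid level: produce a waypoint path (straight-line interpolation here)
    that a trajectory optimizer would then smooth into a collision-free
    static trajectory against the static portion of the map."""
    return [tuple(s + (g - s) * i / (n_waypoints - 1) for s, g in zip(start, goal))
            for i in range(n_waypoints)]

def low_level_step(pos: tuple, waypoint: tuple, obstacles: list,
                   clearance: float = 1.5, step: float = 1.0) -> tuple:
    """Low level: step toward the next waypoint, adding a repulsive
    correction away from any tracked dynamic obstacle inside the
    clearance radius."""
    norm = math.dist(pos, waypoint) or 1.0
    move = [(w - p) / norm * min(step, norm) for p, w in zip(pos, waypoint)]
    for obs in obstacles:
        d = math.dist(pos, obs)
        if d < clearance:  # push away from the nearby dynamic obstacle
            move = [m + (p - o) / (d or 1e-6) * (clearance - d)
                    for m, p, o in zip(move, pos, obs)]
    return tuple(p + m for p, m in zip(pos, move))

# Usage: plan toward the excavation front while stepping past one
# (hypothetical) moving obstacle reported by the dynamic map.
goal = high_level_decision(inspected_front=False)
path = mid_level_path((0.0, 0.0, 1.0), goal)
pos = path[0]
for wp in path[1:]:
    pos = low_level_step(pos, wp, obstacles=[(10.0, 0.5, 1.5)])
```

The point of the decomposition is that each level runs at a different rate: the static trajectory is re-planned only when the task or map changes, while the low-level step executes at sensor rate against the dynamic obstacle tracks.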
Related papers
- OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments [77.0399450848749]
We propose an OccNeRF method for training occupancy networks without 3D supervision.
We parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range.
For semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model.
arXiv Detail & Related papers (2023-12-14T18:58:52Z)
- Vision-aided UAV navigation and dynamic obstacle avoidance using gradient-based B-spline trajectory optimization [7.874708385247353]
This paper proposes a gradient-based B-spline trajectory optimization algorithm utilizing the robot's onboard vision.
The proposed optimization first adopts the circle-based guide-point algorithm to approximate the costs and gradients for avoiding static obstacles.
With the vision-detected moving objects, our receding-horizon distance field is simultaneously used to prevent dynamic collisions.
arXiv Detail & Related papers (2022-09-15T02:12:30Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose a learning-based approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
arXiv Detail & Related papers (2022-02-23T02:14:23Z)
- Anchor-free 3D Single Stage Detector with Mask-Guided Attention for Point Cloud [79.39041453836793]
We develop a novel single-stage 3D detector for point clouds in an anchor-free manner.
We convert the voxel-based sparse 3D feature volumes into sparse 2D feature maps.
We propose an IoU-based detection confidence re-calibration scheme to improve the correlation between the detection confidence score and the accuracy of the bounding box regression.
arXiv Detail & Related papers (2021-08-08T13:42:13Z)
- Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
arXiv Detail & Related papers (2021-04-23T17:59:28Z)
- Three dimensional unique identifier based automated georeferencing and coregistration of point clouds in underground environment [0.0]
This study aims at overcoming practical challenges in underground or indoor laser scanning.
The developed approach involves automatically and uniquely identifiable three dimensional unique identifiers (3DUIDs) in laser scans and a 3D registration (3DReG) workflow.
The developed 3DUID can be used in roadway profile extraction, guided automation, sensor calibration, reference targets for routine survey and deformation monitoring.
arXiv Detail & Related papers (2021-02-22T01:47:50Z)
- Domain Adaptation for Outdoor Robot Traversability Estimation from RGB data with Safety-Preserving Loss [12.697106921197701]
We present an approach based on deep learning to estimate and anticipate the traversing score of different routes in the field of view of an on-board RGB camera.
We then enhance the model's capabilities by addressing domain shifts through gradient-reversal unsupervised adaptation.
Experimental results show that our approach is able to satisfactorily identify traversable areas and to generalize to unseen locations.
arXiv Detail & Related papers (2020-09-16T09:19:33Z)
- Integration of the 3D Environment for UAV Onboard Visual Object Tracking [7.652259812856325]
Single visual object tracking from an unmanned aerial vehicle poses fundamental challenges.
We introduce a pipeline that combines a model-free visual object tracker, a sparse 3D reconstruction, and a state estimator.
By representing the position of the target in 3D space rather than in image space, we stabilize the tracking during ego-motion.
arXiv Detail & Related papers (2020-08-06T18:37:29Z)
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our methods is validated on complex quadruped robot dynamics and can be generally applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.