On Enhancing Ground Surface Detection from Sparse Lidar Point Cloud
- URL: http://arxiv.org/abs/2105.11649v1
- Date: Tue, 25 May 2021 03:58:18 GMT
- Title: On Enhancing Ground Surface Detection from Sparse Lidar Point Cloud
- Authors: Bo Li
- Abstract summary: This paper proposes ground detection techniques applicable to much sparser point clouds captured by lidars with low beam resolution.
The approach is based on the RANSAC scheme of plane fitting.
- Score: 6.577622354490276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ground surface detection in point clouds is widely used as a key module in
autonomous driving systems. Different from previous approaches, which are mostly
developed for lidars with high beam resolution, e.g. the Velodyne HDL-64, this
paper proposes ground detection techniques applicable to much sparser point
clouds captured by lidars with low beam resolution, e.g. the Velodyne VLP-16. The
approach is based on the RANSAC scheme of plane fitting. Inlier verification
for plane hypotheses is enhanced by exploiting the point-wise tangent, a local
feature that can be computed regardless of the density of the lidar beams. A
ground surface that is not perfectly planar is fitted by multiple
(specifically four in our implementation) disjoint plane regions. By assuming
these plane regions to be rectangular and exploiting the integral image
technique, our approach approximately finds the optimal region partition and
plane hypotheses under the RANSAC scheme with real-time computational
complexity.
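To make the two ingredients above concrete, the following Python sketch (not the author's code) illustrates, under stated assumptions, (a) a RANSAC plane-fitting loop whose inlier test combines the usual point-to-plane distance with a tangent-consistency check, and (b) a 2D integral image (summed-area table) that returns the inlier score of any axis-aligned rectangular region in constant time. The tangent estimate by differencing consecutive points of a scan ring, the thresholds, and all function names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def ring_tangents(ring_points):
    """Approximate point-wise tangents by differencing consecutive points of a
    single lidar scan ring (an illustrative choice; the paper only requires a
    local tangent estimate that remains available at low beam resolution)."""
    t = np.diff(ring_points, axis=0, append=ring_points[:1])
    return t / (np.linalg.norm(t, axis=1, keepdims=True) + 1e-9)

def ransac_plane_with_tangents(points, tangents, iters=300,
                               dist_thresh=0.1, tangent_thresh=0.15, rng=None):
    """RANSAC plane fitting in which a point counts as an inlier only if
    (a) it lies close to the plane and (b) its tangent is nearly parallel to
    the plane, i.e. nearly orthogonal to the plane normal."""
    rng = np.random.default_rng(rng)
    best_score, best_model = 0, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-9:          # degenerate (collinear) sample
            continue
        n /= np.linalg.norm(n)
        d = -n.dot(p[0])
        close = np.abs(points @ n + d) < dist_thresh      # distance test
        parallel = np.abs(tangents @ n) < tangent_thresh  # tangent test
        score = np.count_nonzero(close & parallel)
        if score > best_score:
            best_score, best_model = score, (n, d)
    return best_model

def integral_image(cell_scores):
    """Summed-area table over a 2D grid of per-cell inlier scores."""
    return np.pad(cell_scores, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_score(ii, r0, c0, r1, c1):
    """Score of cell_scores[r0:r1, c0:c1] in O(1) via the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

In the paper, the integral image serves the search over partitions of the ground into (up to four) rectangular regions, each carrying its own plane hypothesis; a table like the one above is what makes scoring every candidate rectangle cheap enough for real-time use.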
Related papers
- Plane2Depth: Hierarchical Adaptive Plane Guidance for Monocular Depth Estimation [38.81275292687583]
We propose Plane2Depth, which adaptively utilizes plane information to improve depth prediction within a hierarchical framework.
In the proposed plane guided depth generator (PGDG), we design a set of plane queries as prototypes to softly model planes in the scene and predict per-pixel plane coefficients.
In the proposed adaptive plane query aggregation (APGA) module, we introduce a novel feature interaction approach to improve the aggregation of multi-scale plane features.
arXiv Detail & Related papers (2024-09-04T07:45:06Z)
- Arbitrary-Scale Point Cloud Upsampling by Voxel-Based Network with Latent Geometric-Consistent Learning [52.825441454264585]
We propose an arbitrary-scale Point cloud Upsampling framework using a Voxel-based Network (PU-VoxelNet).
Thanks to the completeness and regularity inherited from the voxel representation, voxel-based networks are capable of providing a predefined grid space to approximate the 3D surface.
A density-guided grid resampling method is developed to generate high-fidelity points while effectively avoiding sampling outliers.
arXiv Detail & Related papers (2024-03-08T07:31:14Z)
- Ternary-Type Opacity and Hybrid Odometry for RGB NeRF-SLAM [58.736472371951955]
We introduce a ternary-type opacity (TT) model, which categorizes points on a ray intersecting a surface into three regions: before, on, and behind the surface.
This enables a more accurate rendering of depth, subsequently improving the performance of image warping techniques.
Our integrated approach of TT and hybrid odometry (HO) achieves state-of-the-art performance on synthetic and real-world datasets; a minimal sketch of the ternary ray-sample classification is given after this list.
arXiv Detail & Related papers (2023-12-20T18:03:17Z)
- An Efficient Plane Extraction Approach for Bundle Adjustment on LiDAR Point Clouds [6.530864926156266]
We propose a novel and efficient voxel-based approach for plane extraction that is specially designed to provide point association for LiDAR bundle adjustment.
Our experimental results on HILTI demonstrate that our approach achieves the best precision and least time cost compared to other plane extraction methods.
arXiv Detail & Related papers (2023-04-29T15:47:29Z)
- Grid-guided Neural Radiance Fields for Large Urban Scenes [146.06368329445857]
Recent approaches propose to geographically divide the scene and adopt multiple sub-NeRFs to model each region individually.
An alternative solution is to use a feature grid representation, which is computationally efficient and can naturally scale to a large scene.
We present a new framework that realizes high-fidelity rendering on large urban scenes while being computationally efficient.
arXiv Detail & Related papers (2023-03-24T13:56:45Z)
- Ground Plane Matters: Picking Up Ground Plane Prior in Monocular 3D Object Detection [92.75961303269548]
The ground plane prior is a very informative geometry clue in monocular 3D object detection (M3OD).
We propose a Ground Plane Enhanced Network (GPENet) which resolves both issues at one go.
Our GPENet can outperform other methods and achieve state-of-the-art performance, demonstrating the effectiveness and superiority of the proposed approach.
arXiv Detail & Related papers (2022-11-03T02:21:35Z)
- PlaneDepth: Self-supervised Depth Estimation via Orthogonal Planes [41.517947010531074]
Depth estimation based on multiple near-fronto-parallel planes has demonstrated impressive results in self-supervised monocular depth estimation (MDE).
We propose PlaneDepth, a novel plane-based representation that includes vertical planes and ground planes.
Our method can extract the ground plane in an unsupervised manner, which is important for autonomous driving.
arXiv Detail & Related papers (2022-10-04T13:51:59Z)
- Patchwork: Concentric Zone-based Region-wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor [0.1657441317977376]
Ground segmentation is crucial for terrestrial mobile platforms to perform navigation or neighboring object recognition.
This paper presents a novel ground segmentation method called Patchwork, which robustly addresses the under-segmentation problem.
arXiv Detail & Related papers (2021-08-12T06:52:10Z)
- Accurate and Robust Scale Recovery for Monocular Visual Odometry Based on Plane Geometry [7.169216737580712]
We develop a lightweight scale recovery framework leveraging an accurate and robust estimation of the ground plane.
Experiments on the KITTI dataset show that the proposed framework can achieve state-of-the-art accuracy in terms of translation errors.
Thanks to its lightweight design, the framework also runs at a high frequency of 20 Hz on the dataset; a generic sketch of ground-plane-based scale recovery is given after this list.
arXiv Detail & Related papers (2021-01-15T07:21:24Z)
- Reconfigurable Voxels: A New Representation for LiDAR-Based Point Clouds [76.52448276587707]
We propose Reconfigurable Voxels, a new approach to constructing representations from 3D point clouds.
Specifically, we devise a biased random walk scheme, which adaptively covers each neighborhood with a fixed number of voxels.
We find that this approach effectively improves the stability of voxel features, especially for sparse regions.
arXiv Detail & Related papers (2020-04-06T15:07:16Z)
- From Planes to Corners: Multi-Purpose Primitive Detection in Unorganized 3D Point Clouds [59.98665358527686]
We propose a new method for segmentation-free joint estimation of orthogonal planes.
Such unified scene exploration allows for a multitude of applications such as semantic plane detection or local and global scan alignment.
Our experiments demonstrate the validity of our approach in numerous scenarios from wall detection to 6D tracking.
arXiv Detail & Related papers (2020-01-21T06:51:47Z)
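For the Ternary-Type Opacity and Hybrid Odometry entry above, the short sketch below shows one hypothetical way to label samples along a camera ray as lying before, on, or behind the surface, given the depth at which the ray meets that surface. The tolerance band and the label convention are illustrative assumptions, not details taken from that paper.

```python
import numpy as np

def classify_ray_samples(t_samples, t_surface, band=0.05):
    """Label ray samples as before (-1), on (0), or behind (+1) the surface.

    t_samples : 1D array of sample depths along the ray
    t_surface : depth at which the ray intersects the surface
    band      : half-width of the on-surface region (assumed tolerance)
    """
    labels = np.full(t_samples.shape, -1, dtype=int)       # default: before
    labels[np.abs(t_samples - t_surface) <= band] = 0      # on the surface
    labels[t_samples > t_surface + band] = 1               # behind the surface
    return labels
```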
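For the monocular scale-recovery entry above (Accurate and Robust Scale Recovery for Monocular Visual Odometry Based on Plane Geometry), the sketch below illustrates only the generic idea behind ground-plane-based scale recovery: fit a plane to triangulated points near the road and take the ratio of the known camera mounting height to the estimated camera-to-plane distance. The RANSAC details, thresholds, and function names are assumptions for illustration, not that paper's implementation.

```python
import numpy as np

def fit_plane_ransac(points, iters=200, thresh=0.05, rng=None):
    """Fit a plane n.x + d = 0 to 3D points with a basic RANSAC loop."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-9:          # skip degenerate samples
            continue
        n /= np.linalg.norm(n)
        d = -n.dot(p[0])
        inliers = np.count_nonzero(np.abs(points @ n + d) < thresh)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model

def recover_scale(triangulated_points, true_camera_height):
    """Scale = known camera height / estimated camera-to-ground-plane distance.

    The camera sits at the origin of its own frame, so its distance to the
    plane n.x + d = 0 (with unit n) is simply |d|."""
    n, d = fit_plane_ransac(triangulated_points)
    return true_camera_height / abs(d)
```

Scaling the up-to-scale visual-odometry translation by this factor is what removes the scale ambiguity of monocular VO.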