Polylidar3D -- Fast Polygon Extraction from 3D Data
- URL: http://arxiv.org/abs/2007.12065v1
- Date: Thu, 23 Jul 2020 15:22:43 GMT
- Title: Polylidar3D -- Fast Polygon Extraction from 3D Data
- Authors: Jeremy Castagno, Ella Atkins
- Abstract summary: Flat surfaces captured by 3D point clouds are often used for localization, mapping, and modeling.
We demonstrate Polylidar3D's versatility and speed with real-world datasets for rooftop mapping, road surface detection, and indoor floor/wall detection with RGBD cameras.
Results consistently show excellent speed and accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Flat surfaces captured by 3D point clouds are often used for localization,
mapping, and modeling. Dense point cloud processing has high computation and
memory costs making low-dimensional representations of flat surfaces such as
polygons desirable. We present Polylidar3D, a non-convex polygon extraction
algorithm which takes as input unorganized 3D point clouds (e.g., LiDAR data),
organized point clouds (e.g., range images), or user-provided meshes.
Non-convex polygons represent flat surfaces in an environment with interior
cutouts representing obstacles or holes. The Polylidar3D front-end transforms
input data into a half-edge triangular mesh. This representation provides a
common level of input data abstraction for subsequent back-end processing. The
Polylidar3D back-end is composed of four core algorithms: mesh smoothing,
dominant plane normal estimation, planar segment extraction, and finally
polygon extraction. Polylidar3D is shown to be quite fast, making use of CPU
multi-threading and GPU acceleration when available. We demonstrate
Polylidar3D's versatility and speed with real-world datasets including aerial
LiDAR point clouds for rooftop mapping, autonomous driving LiDAR point clouds
for road surface detection, and RGBD cameras for indoor floor/wall detection.
We also evaluate Polylidar3D on a challenging planar segmentation benchmark
dataset. Results consistently show excellent speed and accuracy.
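The back-end stages described in the abstract can be sketched in miniature. The snippet below is not the authors' implementation (Polylidar3D uses half-edge meshes, CPU multi-threading, and hole-aware polygon extraction); it is a minimal, assumption-laden illustration of one stage only, planar segment filtering by triangle normals, assuming an unorganized cloud that can be triangulated by projecting onto the xy-plane. The function name `planar_triangles` and the angle threshold are illustrative, not from the paper.

```python
# Minimal sketch of one Polylidar3D back-end stage (planar segment
# extraction by normal filtering), NOT the authors' code or library API.
import numpy as np
from scipy.spatial import Delaunay

def planar_triangles(points, plane_normal, max_angle_deg=10.0):
    """Return the triangles whose surface normal lies within
    max_angle_deg of plane_normal (a crude planar-segment filter)."""
    tri = Delaunay(points[:, :2])          # "front-end": triangulate the xy-projection
    simplices = tri.simplices
    p0, p1, p2 = (points[simplices[:, i]] for i in range(3))
    normals = np.cross(p1 - p0, p2 - p0)   # per-triangle normals from edge vectors
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    cos_max = np.cos(np.radians(max_angle_deg))
    aligned = np.abs(normals @ plane_normal) >= cos_max
    return simplices[aligned]

# Toy cloud: a flat floor patch plus one raised "obstacle" point.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(50, 3))
pts[:, 2] = 0.0          # flatten onto z = 0
pts[0, 2] = 0.5          # raise one point; triangles touching it tilt away
flat = planar_triangles(pts, np.array([0.0, 0.0, 1.0]))
print(f"{len(flat)} near-horizontal triangles kept")
```

In the real system the surviving triangles would then be grouped into connected components and their boundaries traced into non-convex polygons with interior holes; here the tilted triangles around the raised point are simply dropped, which is where such a hole would appear.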
Related papers
- DPPD: Deformable Polar Polygon Object Detection [3.9236649268347765]
We develop a novel Deformable Polar Polygon Object Detection method (DPPD) to detect objects in polygon shapes.
DPPD has been demonstrated successfully in various object detection tasks for autonomous driving.
arXiv Detail & Related papers (2023-04-05T06:43:41Z)
- Scatter Points in Space: 3D Detection from Multi-view Monocular Images [8.71944437852952]
3D object detection from monocular image(s) is a challenging and long-standing problem of computer vision.
Recent methods tend to aggregate multi-view features by densely sampling a regular 3D grid in space.
We propose a learnable keypoints sampling method, which scatters pseudo surface points in 3D space, in order to keep data sparsity.
arXiv Detail & Related papers (2022-08-31T09:38:05Z)
- PolyNet: Polynomial Neural Network for 3D Shape Recognition with PolyShape Representation [51.147664305955495]
3D shape representation and its processing have substantial effects on 3D shape recognition.
We propose a deep neural network-based method (PolyNet) and a specific polygon representation (PolyShape).
Our experiments demonstrate the strength and the advantages of PolyNet on both 3D shape classification and retrieval tasks.
arXiv Detail & Related papers (2021-10-15T06:45:59Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- From Multi-View to Hollow-3D: Hallucinated Hollow-3D R-CNN for 3D Object Detection [101.20784125067559]
We propose a new architecture, namely Hallucinated Hollow-3D R-CNN, to address the problem of 3D object detection.
In our approach, we first extract the multi-view features by sequentially projecting the point clouds into the perspective view and the bird's-eye view.
The 3D objects are detected via a box refinement module with a novel Hierarchical Voxel RoI Pooling operation.
arXiv Detail & Related papers (2021-07-30T02:00:06Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
arXiv Detail & Related papers (2021-06-03T05:36:39Z)
- Exploring Deep 3D Spatial Encodings for Large-Scale 3D Scene Understanding [19.134536179555102]
We propose an alternative approach to overcome the limitations of CNN based approaches by encoding the spatial features of raw 3D point clouds into undirected graph models.
The proposed method achieves on par state-of-the-art accuracy with improved training time and model stability thus indicating strong potential for further research.
arXiv Detail & Related papers (2020-11-29T12:56:19Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in the 2D space.
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
We develop a 3D cylinder partition and a 3D cylinder convolution based framework, termed as Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
- KAPLAN: A 3D Point Descriptor for Shape Completion [80.15764700137383]
KAPLAN is a 3D point descriptor that aggregates local shape information via a series of 2D convolutions.
In each of those planes, point properties like normals or point-to-plane distances are aggregated into a 2D grid and abstracted into a feature representation with an efficient 2D convolutional encoder.
Experiments on public datasets show that KAPLAN achieves state-of-the-art performance for 3D shape completion.
arXiv Detail & Related papers (2020-07-31T21:56:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.