CurveCloudNet: Processing Point Clouds with 1D Structure
- URL: http://arxiv.org/abs/2303.12050v2
- Date: Thu, 1 Feb 2024 22:22:17 GMT
- Title: CurveCloudNet: Processing Point Clouds with 1D Structure
- Authors: Colton Stearns, Davis Rempe, Jiateng Liu, Alex Fu, Sebastien Mascha, Jeong Joon Park, Despoina Paschalidou, and Leonidas J. Guibas
- Abstract summary: We introduce a new point cloud processing scheme and backbone, called CurveCloudNet.
CurveCloudNet parameterizes the point cloud as a collection of polylines, establishing a local surface-aware ordering on the points.
We demonstrate that CurveCloudNet outperforms both point-based and sparse-voxel backbones in various segmentation settings.
- Score: 49.137477909835276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern depth sensors such as LiDAR operate by sweeping laser-beams across the
scene, resulting in a point cloud with notable 1D curve-like structures. In
this work, we introduce a new point cloud processing scheme and backbone,
called CurveCloudNet, which takes advantage of the curve-like structure
inherent to these sensors. While existing backbones discard the rich 1D
traversal patterns and rely on generic 3D operations, CurveCloudNet
parameterizes the point cloud as a collection of polylines (dubbed a "curve
cloud"), establishing a local surface-aware ordering on the points. By
reasoning along curves, CurveCloudNet captures lightweight curve-aware priors
to efficiently and accurately reason in several diverse 3D environments. We
evaluate CurveCloudNet on multiple synthetic and real datasets that exhibit
distinct 3D size and structure. We demonstrate that CurveCloudNet outperforms
both point-based and sparse-voxel backbones in various segmentation settings,
notably scaling to large scenes better than point-based alternatives while
exhibiting improved single-object performance over sparse-voxel alternatives.
In all, CurveCloudNet is an efficient and accurate backbone that can handle a
larger variety of 3D environments than past works.
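The abstract describes the curve cloud only at a high level: raw points are regrouped into polylines that follow the sensor's laser sweeps, giving every point an explicit 1D, surface-aware ordering. The paper's own construction is not detailed here, but a minimal sketch of how such a grouping could be formed, assuming points arrive ordered along each laser beam and that a simple gap threshold separates polylines (all names and parameters below are illustrative, not the authors' code):

    import numpy as np

    def split_into_polylines(points, beam_ids, gap=0.3):
        """Group an ordered LiDAR sweep into polylines (a "curve cloud").

        points   : (N, 3) array, ordered by acquisition time within each beam.
        beam_ids : (N,) array of laser/ring indices, one per point.
        gap      : distance in meters above which consecutive points are
                   considered to belong to different polylines.
        """
        curves = []
        for beam in np.unique(beam_ids):
            beam_pts = points[beam_ids == beam]
            if len(beam_pts) < 2:
                curves.append(beam_pts)
                continue
            # Distance between consecutive points along the sweep.
            dists = np.linalg.norm(np.diff(beam_pts, axis=0), axis=1)
            # Start a new polyline wherever the gap threshold is exceeded.
            breaks = np.where(dists > gap)[0] + 1
            curves.extend(np.split(beam_pts, breaks))
        return curves  # list of (M_i, 3) arrays, each an ordered polyline

Each returned polyline preserves the 1D ordering along the sweep, which is the structure the abstract says CurveCloudNet reasons along; the actual grouping procedure and the curve-aware operators are described in the paper itself.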
Related papers
- P2P-Bridge: Diffusion Bridges for 3D Point Cloud Denoising [81.92854168911704]
We tackle the task of point cloud denoising through a novel framework that adapts Diffusion Schrödinger bridges to point clouds.
Experiments on object datasets show that P2P-Bridge achieves significant improvements over existing methods.
arXiv Detail & Related papers (2024-08-29T08:00:07Z)
- Variational Relational Point Completion Network for Robust 3D Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, the Variational Relational Point Completion Network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for kNN operations.
The proposed framework, PointAttN, is simple, neat, and effective, and can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use the Chamfer Distance (CD) loss for training; a common form of this loss is written out after this list.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable Rendering [21.563862632172363]
We propose a self-supervised point cloud upsampling network (SSPU-Net) to generate dense point clouds without using ground truth.
To achieve this, we exploit the consistency between the input sparse point cloud and the generated dense point cloud, both in their shapes and in their rendered images.
arXiv Detail & Related papers (2021-08-01T13:26:01Z)
- The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions [15.997907568429177]
This paper investigates different variants of PointConv, a convolution network on point clouds, to examine their robustness to input scale and rotation changes.
We derive a novel viewpoint-invariant descriptor by utilizing 3D geometric properties as the input to PointConv.
Experiments are conducted on the 2D MNIST & CIFAR-10 datasets as well as the 3D SemanticKITTI & ScanNet datasets.
arXiv Detail & Related papers (2021-01-19T19:32:38Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
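For reference, the Chamfer Distance (CD) loss mentioned in the Point Diffusion-Refinement entry above is commonly written in the following squared form (variants without the square, or with different normalization, also appear in the literature):

    d_{CD}(X, Y) = \frac{1}{|X|} \sum_{x \in X} \min_{y \in Y} \| x - y \|_2^2
                 + \frac{1}{|Y|} \sum_{y \in Y} \min_{x \in X} \| y - x \|_2^2

where X is the predicted point set and Y the reference point set; each point is matched to its nearest neighbor in the other set, so the loss rewards coverage in both directions.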