DSMNet: Deep High-precision 3D Surface Modeling from Sparse Point Cloud
Frames
- URL: http://arxiv.org/abs/2304.04200v1
- Date: Sun, 9 Apr 2023 09:23:06 GMT
- Title: DSMNet: Deep High-precision 3D Surface Modeling from Sparse Point Cloud
Frames
- Authors: Changjie Qiu, Zhiyong Wang, Xiuhong Lin, Yu Zang, Cheng Wang, Weiquan
Liu
- Abstract summary: Existing point cloud modeling datasets express the modeling precision by pose or trajectory precision rather than the point cloud modeling effect itself.
We propose a novel learning-based joint framework, DSMNet, for high-precision 3D surface modeling from sparse point cloud frames.
- Score: 12.531880335603145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing point cloud modeling datasets primarily express the modeling
precision by pose or trajectory precision rather than the point cloud modeling
effect itself. Motivated by this gap, we first independently construct a LiDAR
system with an optical stage, and then build HPMB, a High-Precision, Multi-Beam,
real-world dataset, on top of the constructed system. Second, we propose an
object-level modeling evaluation method based on HPMB to overcome this
limitation. In addition, existing point cloud modeling methods tend to generate
continuous skeletons of the global environment and hence pay little attention to
the shapes of complex objects. To tackle
this challenge, we propose a novel learning-based joint framework, DSMNet, for
high-precision 3D surface modeling from sparse point cloud frames. DSMNet
comprises density-aware Point Cloud Registration (PCR) and geometry-aware Point
Cloud Sampling (PCS) to effectively learn the implicit structure feature of
sparse point clouds. Extensive experiments demonstrate that DSMNet outperforms
the state-of-the-art methods in PCS and PCR on the Multi-View Partial Point Cloud
(MVP) database. Furthermore, experiments on the open-source KITTI dataset and our
proposed HPMB dataset show that DSMNet can be generalized as a post-processing
step for Simultaneous Localization And Mapping (SLAM), thereby improving modeling
precision in environments with sparse point clouds.
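The summary above does not include code; as a rough, non-authoritative illustration of the two ideas the abstract names, the sketch below pairs a density-weighted rigid registration step (a classical weighted-ICP stand-in for density-aware PCR) with a curvature-proxy point selection step (a classical stand-in for geometry-aware PCS). All function names, parameters, and the NumPy implementation are assumptions made here for illustration, not DSMNet's actual learned architecture.
```python
# Illustrative sketch only: classical stand-ins for density-aware registration
# and geometry-aware sampling. Not DSMNet code.
import numpy as np

def local_density(points, k=8):
    """Inverse mean distance to the k nearest neighbours (a simple density proxy)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, 1:k + 1]          # drop the zero self-distance
    return 1.0 / (knn.mean(axis=1) + 1e-8)

def weighted_rigid_step(src, dst, weights):
    """One weighted Kabsch step: rigid (R, t) aligning src to dst under weights."""
    w = weights / weights.sum()
    mu_s, mu_d = (w[:, None] * src).sum(0), (w[:, None] * dst).sum(0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, mu_d - R @ mu_s

def density_aware_icp(src, dst, iters=20):
    """ICP in which each source point is weighted by its local density."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=-1)
        nn = d.argmin(axis=1)                     # nearest-neighbour correspondences
        w = local_density(cur)                    # density-aware weighting
        R, t = weighted_rigid_step(cur, dst[nn], w)
        cur = cur @ R.T + t
    return cur

def geometry_aware_sample(points, n, k=8):
    """Keep the n points with the largest surface-variation (curvature proxy) score."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    scores = np.empty(len(points))
    for i, row in enumerate(d):
        nbrs = points[np.argsort(row)[1:k + 1]]
        eig = np.linalg.eigvalsh(np.cov(nbrs.T))  # ascending eigenvalues of local covariance
        scores[i] = eig[0] / (eig.sum() + 1e-8)   # high for edges/corners, ~0 on planes
    return points[np.argsort(-scores)[:n]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dst = rng.normal(size=(200, 3))
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    src = dst @ R_true.T + np.array([0.3, -0.2, 0.1])   # a misaligned frame
    aligned = density_aware_icp(src, dst)
    kept = geometry_aware_sample(aligned, n=64)
    print(aligned.shape, kept.shape)
```
In DSMNet the two stages are learned jointly; the classical stand-ins above are only meant to convey why density weighting helps when registering sparse frames and why geometry-aware selection preserves the shapes of complex objects.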
Related papers
- Point Cloud Mamba: Point Cloud Learning via State Space Model [73.7454734756626]
We show that Mamba-based point cloud methods can outperform previous methods based on transformers or multi-layer perceptrons (MLPs).
Point Cloud Mamba surpasses the state-of-the-art (SOTA) point-based method PointNeXt and achieves new SOTA performance on the ScanObjectNN, ModelNet40, ShapeNetPart, and S3DIS datasets.
arXiv Detail & Related papers (2024-03-01T18:59:03Z) - ModelNet-O: A Large-Scale Synthetic Dataset for Occlusion-Aware Point
Cloud Classification [28.05358017259757]
We propose ModelNet-O, a large-scale synthetic dataset of 123,041 samples.
ModelNet-O emulates real-world point clouds with self-occlusion caused by scanning from monocular cameras.
We propose a robust point cloud processing method called PointMLS.
arXiv Detail & Related papers (2024-01-16T08:54:21Z) - PointeNet: A Lightweight Framework for Effective and Efficient Point
Cloud Analysis [28.54939134635978]
PointeNet is a network designed specifically for point cloud analysis.
Our method demonstrates flexibility by seamlessly integrating with a classification/segmentation head or embedding into off-the-shelf 3D object detection networks.
Experiments on object-level datasets, including ModelNet40, ScanObjectNN, and ShapeNetPart, and the scene-level dataset KITTI, demonstrate the superior performance of PointeNet over state-of-the-art methods in point cloud analysis.
arXiv Detail & Related papers (2023-12-20T03:34:48Z) - Variational Relational Point Completion Network for Robust 3D
Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, Variational Relational point Completion Network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z) - StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity and even 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z) - Controllable Mesh Generation Through Sparse Latent Point Diffusion
Models [105.83595545314334]
We design a novel sparse latent point diffusion model for mesh generation.
Our key insight is to regard point clouds as an intermediate representation of meshes, and model the distribution of point clouds instead.
Our proposed sparse latent point diffusion model achieves superior performance in terms of generation quality and controllability.
arXiv Detail & Related papers (2023-03-14T14:25:29Z) - PointPatchMix: Point Cloud Mixing with Patch Scoring [58.58535918705736]
We propose PointPatchMix, which mixes point clouds at the patch level and generates content-based targets for mixed point clouds.
Our approach preserves local features at the patch level, while the patch scoring module assigns targets based on content-based significance scores from a pre-trained teacher model (an illustrative patch-mixing sketch appears after this list).
With Point-MAE as our baseline, our model surpasses previous methods by a significant margin, achieving 86.3% accuracy on ScanObjectNN and 94.1% accuracy on ModelNet40.
arXiv Detail & Related papers (2023-03-12T14:49:42Z) - HSurf-Net: Normal Estimation for 3D Point Clouds by Learning Hyper
Surfaces [54.77683371400133]
We propose a novel normal estimation method called HSurf-Net, which can accurately predict normals from point clouds with noise and density variations.
Experimental results show that our HSurf-Net achieves the state-of-the-art performance on the synthetic shape dataset.
arXiv Detail & Related papers (2022-10-13T16:39:53Z) - Flow-based GAN for 3D Point Cloud Generation from a Single Image [16.04710129379503]
We introduce a hybrid explicit-implicit generative modeling scheme, which inherits the flow-based explicit generative models for sampling point clouds with arbitrary resolutions.
We evaluate on the large-scale synthetic dataset ShapeNet, with the experimental results demonstrating the superior performance of the proposed method.
arXiv Detail & Related papers (2022-10-08T17:58:20Z) - Point Cloud based Hierarchical Deep Odometry Estimation [3.058685580689605]
We propose a deep model that learns to estimate odometry in driving scenarios using point cloud data.
The proposed model consumes raw point clouds to estimate frame-to-frame odometry.
arXiv Detail & Related papers (2021-03-05T00:17:58Z)
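For the PointPatchMix entry above, the following minimal sketch illustrates patch-level point cloud mixing under simplifying assumptions: patches are formed by farthest point sampling plus nearest-centroid grouping, a subset of patches is swapped between two clouds, and the label target is mixed by point count. The content-based patch scoring from a pre-trained teacher model described in the paper is deliberately omitted, and all names here are illustrative rather than the authors' code.
```python
# Minimal sketch of patch-level point cloud mixing (not the PointPatchMix code).
# Teacher-based patch scoring is replaced by a simple point-count mixing ratio.
import numpy as np

def farthest_point_sample(points, m, seed=0):
    """Pick m well-spread centroid indices via farthest point sampling."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(points)))]
    dist = np.full(len(points), np.inf)
    for _ in range(m - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[idx[-1]], axis=1))
        idx.append(int(dist.argmax()))
    return np.array(idx)

def mix_patches(cloud_a, cloud_b, num_patches=8, swap=3, seed=0):
    """Replace `swap` patches of cloud_a with the corresponding patches of cloud_b."""
    rng = np.random.default_rng(seed)
    cent = farthest_point_sample(cloud_a, num_patches, seed)
    # Assign every point in both clouds to its nearest centroid of cloud_a.
    part_a = np.linalg.norm(cloud_a[:, None] - cloud_a[cent][None], axis=-1).argmin(1)
    part_b = np.linalg.norm(cloud_b[:, None] - cloud_a[cent][None], axis=-1).argmin(1)
    swapped = rng.choice(num_patches, size=swap, replace=False)
    keep_a = cloud_a[~np.isin(part_a, swapped)]
    take_b = cloud_b[np.isin(part_b, swapped)]
    mixed = np.concatenate([keep_a, take_b], axis=0)
    lam = len(take_b) / len(mixed)      # mixing ratio for the label target
    return mixed, lam

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a, b = rng.normal(size=(256, 3)), rng.normal(size=(256, 3)) + 2.0
    mixed, lam = mix_patches(a, b)
    print(mixed.shape, round(lam, 3))   # target = (1 - lam) * label_a + lam * label_b
```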
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.