Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance
- URL: http://arxiv.org/abs/2007.09267v2
- Date: Wed, 30 Sep 2020 17:49:28 GMT
- Title: Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance
- Authors: Minghua Liu, Xiaoshuai Zhang, Hao Su
- Abstract summary: We propose to leverage the input point cloud as much as possible, by only adding connectivity information to existing points.
Our key innovation is a surrogate of local connectivity, calculated by comparing the intrinsic/extrinsic metrics.
We demonstrate that our method not only preserves details and handles ambiguous structures, but also generalizes well to unseen categories.
- Score: 30.863194319818223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We are interested in reconstructing the mesh representation of object
surfaces from point clouds. Surface reconstruction is a prerequisite for
downstream applications such as rendering, collision avoidance for planning,
animation, etc. However, the task is challenging if the input point cloud has a
low resolution, which is common in real-world scenarios (e.g., from LiDAR or
Kinect sensors). Existing learning-based mesh generative methods mostly predict
the surface by first building a shape embedding at the whole-object level, a
design that causes issues in generating fine-grained details and
generalizing to unseen categories. Instead, we propose to leverage the input
point cloud as much as possible, by only adding connectivity information to
existing points. Particularly, we predict which triplets of points should form
faces. Our key innovation is a surrogate of local connectivity, calculated by
comparing the intrinsic/extrinsic metrics. We learn to predict this surrogate
using a deep point cloud network and then feed it to an efficient
post-processing module for high-quality mesh generation. We demonstrate that
our method not only preserves details and handles ambiguous structures, but
also generalizes well to unseen categories, as shown by experiments on
synthetic and real data. The code is available at
https://github.com/Colin97/Point2Mesh.
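The intrinsic/extrinsic idea can be illustrated with a small sketch (ours, not the authors' code): the extrinsic metric is the Euclidean distance between two points, while the intrinsic metric is approximated here by the shortest-path distance on a k-nearest-neighbor graph of the cloud. In the paper this ratio is predicted by a deep point cloud network and used to filter candidate triangles; the graph-based computation below only serves to show what the ratio measures. Pairs whose ratio stays near 1 are good candidates for local connectivity, while large (or infinite) ratios flag points that are close in space but far apart on the surface.

```python
# Minimal sketch of the intrinsic/extrinsic ratio (not the authors' pipeline).
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path
from scipy.spatial import cKDTree

def intrinsic_extrinsic_ratio(points, k=8):
    """points: (N, 3) array. Returns an (N, N) matrix of geodesic/Euclidean ratios."""
    n = len(points)
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)          # first neighbor is the point itself
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].reshape(-1)
    vals = dists[:, 1:].reshape(-1)
    graph = csr_matrix((vals, (rows, cols)), shape=(n, n))
    geodesic = shortest_path(graph, method="D", directed=False)   # intrinsic proxy
    euclidean = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(euclidean > 0, geodesic / euclidean, 1.0)
    return ratio

# Example: two parallel sheets, 0.2 apart, sampled more densely than the gap.
xs, ys = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
sheet = np.stack([xs.ravel(), ys.ravel(), np.zeros(400)], axis=1)
cloud = np.concatenate([sheet, sheet + [0.0, 0.0, 0.2]])
r = intrinsic_extrinsic_ratio(cloud, k=6)
print(r[0, 400])   # point vs. the point right above it: infinite ratio (other sheet)
print(r[0, 1])     # in-sheet neighbors: ratio close to 1, so connect them
```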
Related papers
- GeoMAE: Masked Geometric Target Prediction for Self-supervised Point Cloud Pre-Training [16.825524577372473]
We introduce a point cloud representation learning framework, based on geometric feature reconstruction.
We identify three self-supervised learning objectives peculiar to point clouds, namely centroid prediction, normal estimation, and curvature prediction.
Our pipeline is conceptually simple and consists of two major steps: it first randomly masks out groups of points, which are then processed by a Transformer-based point cloud encoder.
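As a rough illustration (our own sketch, not the GeoMAE implementation), such geometric targets for a masked group of points can be derived from local PCA: the centroid is the mean, the normal is the least-variance eigenvector of the covariance, and a common curvature proxy is the surface-variation ratio of the smallest eigenvalue to the eigenvalue sum.

```python
# Assumed PCA-based construction of centroid / normal / curvature targets.
import numpy as np

def geometric_targets(group):
    """group: (M, 3) points belonging to one masked patch."""
    centroid = group.mean(axis=0)
    centered = group - centroid
    cov = centered.T @ centered / len(group)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    normal = eigvecs[:, 0]                        # direction of least variance
    curvature = eigvals[0] / eigvals.sum()        # "surface variation" proxy
    return centroid, normal, curvature

# A noisy planar patch yields a near-zero curvature proxy and a normal close to +/- z.
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(-1, 1, 64), rng.uniform(-1, 1, 64),
                         0.01 * rng.normal(size=64)])
print(geometric_targets(patch))
```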
arXiv Detail & Related papers (2023-05-15T17:14:55Z)
- Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors [52.25114448281418]
Current methods are able to reconstruct surfaces by learning Signed Distance Functions (SDFs) from single point clouds without ground truth signed distances or point normals.
We propose to reconstruct highly accurate surfaces from sparse point clouds with an on-surface prior.
Our method can learn SDFs from a single sparse point cloud without ground truth signed distances or point normals.
arXiv Detail & Related papers (2022-04-22T09:45:20Z)
- Deep Surface Reconstruction from Point Clouds with Visibility Information [66.05024551590812]
We present two simple ways to augment raw point clouds with visibility information, so that this information can be directly leveraged by surface reconstruction networks with minimal adaptation.
Our proposed modifications consistently improve the accuracy of generated surfaces as well as the generalization ability of the networks to unseen shape domains.
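One plausible form of such augmentation, sketched below under our own assumptions rather than the paper's exact scheme, is to concatenate to each point the unit view direction from the sensor that observed it, so the network knows on which side of the point the sensor saw free space.

```python
# Assumed visibility augmentation: append a per-point sensor view direction.
import numpy as np

def augment_with_view_direction(points, sensor_position):
    """points: (N, 3); sensor_position: (3,). Returns (N, 6) augmented points."""
    directions = points - sensor_position
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return np.concatenate([points, directions], axis=1)

cloud = np.random.rand(1024, 3)
augmented = augment_with_view_direction(cloud, sensor_position=np.array([0.0, 0.0, 2.0]))
```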
arXiv Detail & Related papers (2022-02-03T19:33:47Z)
- POCO: Point Convolution for Surface Reconstruction [92.22371813519003]
Implicit neural networks have been successfully used for surface reconstruction from point clouds.
Many of them face scalability issues as they encode the isosurface function of a whole object or scene into a single latent vector.
We propose to use point cloud convolutions and compute latent vectors at each input point.
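A minimal sketch of the per-point-latent idea: latent vectors live on the input points, and the occupancy of an arbitrary query is decoded from a weighted combination of the latents of its nearest input points. POCO learns this interpolation with attention; the inverse-distance weighting and the toy decoder below are stand-ins, not the paper's components.

```python
# Per-point latents aggregated at query locations (inverse-distance stand-in).
import numpy as np
from scipy.spatial import cKDTree

def query_occupancy(points, latents, queries, decode, k=8, eps=1e-8):
    """points: (N, 3); latents: (N, C) per-point latent vectors from a backbone;
    queries: (Q, 3). Returns (Q,) occupancy estimates."""
    tree = cKDTree(points)
    dists, idx = tree.query(queries, k=k)                      # (Q, k)
    weights = 1.0 / (dists + eps)
    weights /= weights.sum(axis=1, keepdims=True)
    pooled = (weights[..., None] * latents[idx]).sum(axis=1)   # (Q, C)
    return decode(pooled)

# Toy usage with random latents and a random linear "decoder".
rng = np.random.default_rng(0)
pts, lat = rng.normal(size=(2048, 3)), rng.normal(size=(2048, 32))
w = rng.normal(size=32)
occ = query_occupancy(pts, lat, rng.normal(size=(10, 3)),
                      decode=lambda z: 1 / (1 + np.exp(-z @ w)))
```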
arXiv Detail & Related papers (2022-01-05T21:26:18Z)
- OMNet: Learning Overlapping Mask for Partial-to-Partial Point Cloud Registration [31.108056345511976]
OMNet is a global-feature-based iterative network for partial-to-partial point cloud registration.
We learn masks in a coarse-to-fine manner to reject non-overlapping regions, which converts the partial-to-partial registration into registration of the same shapes.
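The core masking idea can be sketched as follows (a simplification, not OMNet's architecture): per-point overlap scores gate the per-point features before global pooling, so points predicted to lie outside the overlap region have little influence on the descriptor from which the rigid transform is regressed.

```python
# Overlap-masked global pooling sketch (soft gating as a simplification).
import numpy as np

def masked_global_feature(point_features, overlap_scores):
    """point_features: (N, C); overlap_scores: (N,) in [0, 1]."""
    gated = point_features * overlap_scores[:, None]   # down-weight non-overlap points
    return gated.max(axis=0)                           # (C,) global descriptor

rng = np.random.default_rng(0)
feats = rng.normal(size=(1024, 256))
scores = rng.uniform(size=1024)          # in practice, output of a mask-prediction head
global_feat = masked_global_feature(feats, scores)
```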
arXiv Detail & Related papers (2021-03-01T11:59:59Z)
- RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction [19.535169371240073]
We introduce RfD-Net that jointly detects and reconstructs dense object surfaces directly from point clouds.
We decouple the instance reconstruction into global object localization and local shape prediction.
Our approach consistently outperforms the state of the art and improves object reconstruction by over 11 in mesh IoU.
arXiv Detail & Related papers (2020-11-30T12:58:05Z)
- Refinement of Predicted Missing Parts Enhance Point Cloud Completion [62.997667081978825]
Point cloud completion is the task of predicting complete geometry from partial observations using a point set representation for a 3D shape.
Previous approaches propose neural networks to directly estimate the whole point cloud through encoder-decoder models fed by the incomplete point set.
This paper proposes an end-to-end neural network architecture that focuses on computing the missing geometry and merging the known input and the predicted point cloud.
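A hedged sketch of that merge step, with illustrative function names of our own: the network predicts only the missing region, and the final completion is the union of the observed partial input and the predicted points, resampled to a fixed budget (here with farthest point sampling).

```python
# Merge observed and predicted points, then resample to a fixed size.
import numpy as np

def farthest_point_sample(points, m, seed=0):
    rng = np.random.default_rng(seed)
    chosen = [rng.integers(len(points))]
    dist = np.full(len(points), np.inf)
    for _ in range(m - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))
    return points[chosen]

def merge_completion(partial_input, predicted_missing, out_size=2048):
    merged = np.concatenate([partial_input, predicted_missing], axis=0)
    return farthest_point_sample(merged, out_size)

rng = np.random.default_rng(1)
completed = merge_completion(rng.normal(size=(1024, 3)), rng.normal(size=(1536, 3)))
```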
arXiv Detail & Related papers (2020-10-08T22:01:23Z)
- TearingNet: Point Cloud Autoencoder to Learn Topology-Friendly Representations [20.318695890515613]
We propose an autoencoder, TearingNet, which tackles the challenging task of representing point clouds using a fixed-length descriptor.
Our TearingNet is characterized by a proposed Tearing network module and a Folding network module interacting with each other iteratively.
Experiments show that our proposal outperforms benchmarks both in reconstructing point clouds and in generating more topology-friendly representations.
arXiv Detail & Related papers (2020-06-17T22:42:43Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
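The "gridding" step can be sketched in a few lines of numpy (GRNet itself uses a differentiable CUDA layer; this is only an approximation of the idea): each point distributes trilinear weights to the eight vertices of the grid cell containing it, turning the unordered point set into a regular volume that 3D convolutions can consume.

```python
# Rough gridding sketch: scatter trilinear weights onto a regular 3D grid.
import numpy as np

def gridding(points, resolution=32):
    """points: (N, 3) assumed to lie in [0, 1)^3. Returns an (R, R, R) weight volume."""
    grid = np.zeros((resolution,) * 3)
    coords = points * (resolution - 1)
    base = np.floor(coords).astype(int)
    frac = coords - base
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.abs(1 - dx - frac[:, 0]) *
                     np.abs(1 - dy - frac[:, 1]) *
                     np.abs(1 - dz - frac[:, 2]))
                np.add.at(grid, (base[:, 0] + dx, base[:, 1] + dy, base[:, 2] + dz), w)
    return grid

vol = gridding(np.random.rand(2048, 3))
```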
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
- Point2Mesh: A Self-Prior for Deformable Meshes [83.31236364265403]
We introduce Point2Mesh, a technique for reconstructing a surface mesh from an input point cloud.
The self-prior encapsulates reoccurring geometric repetitions from a single shape within the weights of a deep neural network.
We show that Point2Mesh converges to a desirable solution, unlike a prescribed smoothness prior, which often becomes trapped in undesirable local minima.
arXiv Detail & Related papers (2020-05-22T10:01:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.