POCO: Point Convolution for Surface Reconstruction
- URL: http://arxiv.org/abs/2201.01831v1
- Date: Wed, 5 Jan 2022 21:26:18 GMT
- Title: POCO: Point Convolution for Surface Reconstruction
- Authors: Alexandre Boulch, Renaud Marlet
- Abstract summary: Implicit neural networks have been successfully used for surface reconstruction from point clouds.
However, many of them face scalability issues as they encode the isosurface function of a whole object or scene into a single latent vector.
We propose to use point cloud convolutions and compute latent vectors at each input point.
- Score: 92.22371813519003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural networks have been successfully used for surface
reconstruction from point clouds. However, many of them face scalability issues
as they encode the isosurface function of a whole object or scene into a single
latent vector. To overcome this limitation, a few approaches infer latent
vectors on a coarse regular 3D grid or on 3D patches, and interpolate them to
answer occupancy queries. In doing so, they lose the direct connection with
the input points sampled on the surface of objects, and they attach information
uniformly in space rather than where it matters the most, i.e., near the
surface. Besides, relying on fixed patch sizes may require discretization
tuning. To address these issues, we propose to use point cloud convolutions and
compute latent vectors at each input point. We then perform a learning-based
interpolation on nearest neighbors using inferred weights. Experiments on both
object and scene datasets show that our approach significantly outperforms
other methods on most classical metrics, producing finer details and better
reconstructing thinner volumes. The code is available at
https://github.com/valeoai/POCO.
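To make the decoding step concrete, here is a minimal sketch of the idea described in the abstract: every input point carries a latent vector produced by a point-cloud encoder, and an occupancy query is answered by a learned interpolation over its k nearest input points, with the interpolation weights inferred by a small network. This is not the authors' implementation; the module name, layer sizes, k, and the softmax weighting below are illustrative assumptions, and the actual code is at https://github.com/valeoai/POCO.
```python
import torch
import torch.nn as nn

class PerPointImplicitDecoder(nn.Module):
    """Sketch: occupancy from per-point latents via learned kNN interpolation."""
    def __init__(self, latent_dim=32, k=16):
        super().__init__()
        self.k = k
        # scores one (neighbor latent, relative position) pair -> interpolation weight logit
        self.weight_mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, 64), nn.ReLU(), nn.Linear(64, 1))
        # maps the interpolated latent to an occupancy logit
        self.occ_mlp = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, points, latents, queries):
        # points:  (N, 3) input point cloud sampled on the surface
        # latents: (N, C) one latent vector per input point (from any point-conv encoder)
        # queries: (Q, 3) positions whose occupancy is queried
        d = torch.cdist(queries, points)                   # (Q, N) pairwise distances
        knn = d.topk(self.k, largest=False).indices        # (Q, k) nearest input points
        z = latents[knn]                                    # (Q, k, C) neighbor latents
        rel = points[knn] - queries[:, None, :]             # (Q, k, 3) relative positions
        w = self.weight_mlp(torch.cat([z, rel], dim=-1))    # (Q, k, 1) inferred weights
        w = torch.softmax(w, dim=1)                         # normalize over the k neighbors
        z_q = (w * z).sum(dim=1)                            # (Q, C) learned interpolation
        return self.occ_mlp(z_q).squeeze(-1)                # (Q,) occupancy logits

# Illustrative usage with random tensors standing in for real encoder outputs.
points  = torch.rand(1024, 3)
latents = torch.rand(1024, 32)
queries = torch.rand(2048, 3)
occ_logits = PerPointImplicitDecoder()(points, latents, queries)  # shape (2048,)
```
Any encoder that outputs one latent vector per input point fits this interface; the key point of the abstract is that latents are attached to the input points themselves rather than to a fixed grid or patch discretization.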
Related papers
- GridPull: Towards Scalability in Learning Implicit Representations from 3D Point Clouds [60.27217859189727]
We propose GridPull to improve the efficiency of learning implicit representations from large scale point clouds.
Our novelty lies in the fast inference of a discrete distance field defined on grids without using any neural components.
We use uniform grids for a fast grid search to localize sampled queries, and organize surface points in a tree structure to speed up the calculation of distances to the surface (a minimal sketch of this grid-and-tree query idea appears after this list).
arXiv Detail & Related papers (2023-08-25T04:52:52Z)
- Quadric Representations for LiDAR Odometry, Mapping and Localization [93.24140840537912]
Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and memory usage while achieving competitive, and even superior, accuracy.
arXiv Detail & Related papers (2023-04-27T13:52:01Z)
- Unsupervised Inference of Signed Distance Functions from Single Sparse Point Clouds without Learning Priors [54.966603013209685]
It is vital to infer signed distance functions (SDFs) from 3D point clouds.
We present a neural network to directly infer SDFs from single sparse point clouds.
arXiv Detail & Related papers (2023-03-25T15:56:50Z)
- Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors [52.25114448281418]
Current methods are able to reconstruct surfaces by learning Signed Distance Functions (SDFs) from single point clouds without ground truth signed distances or point normals.
We propose to reconstruct highly accurate surfaces from sparse point clouds with an on-surface prior.
Our method can learn SDFs from a single sparse point cloud without ground truth signed distances or point normals.
arXiv Detail & Related papers (2022-04-22T09:45:20Z)
- Stratified Transformer for 3D Point Cloud Segmentation [89.9698499437732]
Stratified Transformer is able to capture long-range contexts and demonstrates strong generalization ability and high performance.
To combat the challenges posed by irregular point arrangements, we propose first-layer point embedding to aggregate local information.
Experiments demonstrate the effectiveness and superiority of our method on S3DIS, ScanNetv2 and ShapeNetPart datasets.
arXiv Detail & Related papers (2022-03-28T05:35:16Z)
- LatticeNet: Fast Spatio-Temporal Point Cloud Segmentation Using Permutohedral Lattices [27.048998326468688]
Deep convolutional neural networks (CNNs) have shown outstanding performance in the task of semantically segmenting images.
Here, we propose LatticeNet, a novel approach for 3D semantic segmentation, which takes raw point clouds as input.
We present results of 3D segmentation on multiple datasets where our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-08-09T10:17:27Z)
- Mapping of Sparse 3D Data using Alternating Projection [35.735398244213584]
We propose a novel technique to register sparse 3D scans in the absence of texture.
Existing methods such as KinectFusion heavily rely on dense point clouds.
We propose the use of a two-step alternating projection algorithm by formulating the registration as the simultaneous satisfaction of intersection and rigidity constraints.
arXiv Detail & Related papers (2020-10-04T17:40:30Z)
- Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance [30.863194319818223]
We propose to leverage the input point cloud as much as possible, by only adding connectivity information to existing points.
Our key innovation is a surrogate of local connectivity, calculated by comparing the intrinsic/extrinsic metrics.
We demonstrate that our method not only preserves details and handles ambiguous structures, but also generalizes well to unseen categories.
arXiv Detail & Related papers (2020-07-17T22:36:00Z)
- ParSeNet: A Parametric Surface Fitting Network for 3D Point Clouds [40.52124782103019]
We propose a novel, end-to-end trainable, deep network called ParSeNet that decomposes a 3D point cloud into parametric surface patches.
ParSeNet is trained on a large-scale dataset of man-made 3D shapes and captures high-level semantic priors for shape decomposition.
arXiv Detail & Related papers (2020-03-26T22:54:18Z)
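The GridPull entry above mentions evaluating a discrete distance field on uniform grids, with a tree structure over the surface points to accelerate distance queries. The sketch below illustrates that grid-and-tree query idea only; it is not GridPull's implementation, and the resolution, padding, and random point cloud are illustrative assumptions (it assumes SciPy is available).
```python
# Minimal sketch: unsigned distances from a uniform grid to the nearest surface point,
# using a KD-tree over the surface points to speed up the nearest-distance queries.
import numpy as np
from scipy.spatial import cKDTree

def grid_distance_field(surface_points, resolution=64, padding=0.05):
    """Evaluate an unsigned distance field on a uniform grid around the point cloud."""
    lo = surface_points.min(axis=0) - padding
    hi = surface_points.max(axis=0) + padding
    axes = [np.linspace(lo[d], hi[d], resolution) for d in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)   # (R, R, R, 3) grid points
    tree = cKDTree(surface_points)                                # spatial index over the surface
    dist, _ = tree.query(grid.reshape(-1, 3))                     # nearest-surface distances
    return dist.reshape(resolution, resolution, resolution), (lo, hi)

# Illustrative usage with a random point cloud standing in for real surface samples.
if __name__ == "__main__":
    pts = np.random.rand(10000, 3).astype(np.float32)
    field, bounds = grid_distance_field(pts, resolution=32)
    print(field.shape, float(field.min()), float(field.max()))
```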
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.