ParSeNet: A Parametric Surface Fitting Network for 3D Point Clouds
- URL: http://arxiv.org/abs/2003.12181v5
- Date: Tue, 22 Sep 2020 16:05:16 GMT
- Title: ParSeNet: A Parametric Surface Fitting Network for 3D Point Clouds
- Authors: Gopal Sharma, Difan Liu, Subhransu Maji, Evangelos Kalogerakis,
Siddhartha Chaudhuri, Radomír Měch
- Abstract summary: We propose a novel, end-to-end trainable, deep network called ParSeNet that decomposes a 3D point cloud into parametric surface patches.
ParSeNet is trained on a large-scale dataset of man-made 3D shapes and captures high-level semantic priors for shape decomposition.
- Score: 40.52124782103019
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel, end-to-end trainable, deep network called ParSeNet that
decomposes a 3D point cloud into parametric surface patches, including B-spline
patches as well as basic geometric primitives. ParSeNet is trained on a
large-scale dataset of man-made 3D shapes and captures high-level semantic
priors for shape decomposition. It handles a much richer class of primitives
than prior work, and allows us to represent surfaces with higher fidelity. It
also produces repeatable and robust parametrizations of a surface compared to
purely geometric approaches. We present extensive experiments to validate our
approach against analytical and learning-based alternatives. Our source code is
publicly available at: https://hippogriff.github.io/parsenet.
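As a rough illustration of the decomposition described in the abstract, here is a minimal sketch assuming a hypothetical PyTorch module: it embeds each point, predicts a soft point-to-patch assignment, and classifies a primitive type per patch (B-spline fitting and parameter regression are omitted). All module names, shapes, and the primitive set are illustrative assumptions, not ParSeNet's actual architecture or API.

```python
# Minimal, hypothetical sketch of a point-cloud-to-patch decomposition network.
# Not ParSeNet's code: names, sizes, and the primitive set are assumptions.
import torch
import torch.nn as nn

class PatchDecomposer(nn.Module):
    def __init__(self, num_patches=8, num_types=5, dim=128):
        super().__init__()
        # Per-point MLP encoder (PointNet-style, for illustration only).
        self.encoder = nn.Sequential(
            nn.Linear(3, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
        )
        self.seg_head = nn.Linear(dim, num_patches)    # soft point-to-patch assignment
        self.type_head = nn.Linear(dim, num_types)     # per-patch primitive-type logits

    def forward(self, points):                         # points: (B, N, 3)
        feat = self.encoder(points)                    # (B, N, dim)
        seg = self.seg_head(feat).softmax(dim=-1)      # (B, N, num_patches)
        # Pool per-patch features with the soft assignment, then classify the
        # primitive type of each patch (e.g. plane / sphere / cylinder / cone / B-spline).
        patch_feat = torch.einsum('bnp,bnd->bpd', seg, feat)
        patch_feat = patch_feat / (seg.sum(dim=1).unsqueeze(-1) + 1e-8)
        types = self.type_head(patch_feat)             # (B, num_patches, num_types)
        return seg, types

if __name__ == "__main__":
    pc = torch.rand(2, 1024, 3)                        # toy batch of point clouds
    seg, types = PatchDecomposer()(pc)
    print(seg.shape, types.shape)                      # (2, 1024, 8) and (2, 8, 5)
```

In the full method described by the paper, each predicted patch would additionally be fit with its primitive parameters or a B-spline control grid; the sketch above only covers the segmentation and type-classification interface.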
Related papers
- Unsupervised Inference of Signed Distance Functions from Single Sparse
Point Clouds without Learning Priors [54.966603013209685]
It is vital to infer signed distance functions (SDFs) from 3D point clouds.
We present a neural network to directly infer SDFs from single sparse point clouds.
arXiv Detail & Related papers (2023-03-25T15:56:50Z)
- Parameter is Not All You Need: Starting from Non-Parametric Networks for 3D Point Cloud Analysis [51.0695452455959]
We present Point-NN, a non-parametric network for 3D point cloud analysis that consists of purely non-learnable components.
Surprisingly, it performs well on various 3D tasks, requires no parameters or training, and even surpasses existing fully trained models.
arXiv Detail & Related papers (2023-03-14T17:59:02Z)
- POCO: Point Convolution for Surface Reconstruction [92.22371813519003]
Implicit neural networks have been used successfully for surface reconstruction from point clouds.
Many of them face scalability issues because they encode the isosurface function of a whole object or scene into a single latent vector.
We propose to use point cloud convolutions and compute latent vectors at each input point.
arXiv Detail & Related papers (2022-01-05T21:26:18Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, called a point geometry image (PGI).
In contrast to conventional regular representations based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
- PIE-NET: Parametric Inference of Point Cloud Edges [40.27043782820615]
We introduce an end-to-end learnable technique to robustly identify feature edges in 3D point cloud data.
Our deep neural network, coined PIE-NET, is trained for parametric inference of edges.
arXiv Detail & Related papers (2020-07-09T15:35:10Z)
- PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network-based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well both for CAD models with sharp features and for scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)
- Hypernetwork approach to generating point clouds [18.67883065951206]
We build a hypernetwork that returns the weights of a target neural network trained to map points into a 3D shape (a rough sketch of this idea follows the list).
A particular 3D shape can then be generated by point-by-point sampling from the assumed prior distribution.
Since the hypernetwork is based on an auto-encoder architecture trained to reconstruct realistic 3D shapes, the target network's weights can be considered a parametrization of the shape's surface.
arXiv Detail & Related papers (2020-02-10T11:09:58Z)
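As a rough illustration of the hypernetwork idea in the last entry above, here is a minimal PyTorch sketch: a hypernetwork maps a latent code to the flattened weights of a small target MLP, and that MLP maps points sampled from a prior onto a shape. The layer sizes, latent dimension, and weight layout are illustrative assumptions and do not reproduce the paper's architecture.

```python
# Minimal, hypothetical sketch of the hypernetwork idea: a hypernetwork predicts
# the weights of a small target MLP that maps sampled points to a 3D shape.
# Sizes and weight layout are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class TargetMLP:
    """Target network; its weights come from the hypernetwork, not from training it directly."""
    def __init__(self, hidden=32):
        self.h = hidden
        # Flat parameter vector: W1 (3*h), b1 (h), W2 (h*3), b2 (3).
        self.num_params = 3 * hidden + hidden + hidden * 3 + 3

    def __call__(self, pts, params):                   # pts: (N, 3), params: (num_params,)
        h = self.h
        w1 = params[:3 * h].view(3, h)
        b1 = params[3 * h:4 * h]
        w2 = params[4 * h:4 * h + 3 * h].view(h, 3)
        b2 = params[-3:]
        return torch.tanh(pts @ w1 + b1) @ w2 + b2     # (N, 3) points on the generated shape

target = TargetMLP()
hypernet = nn.Sequential(                              # latent code -> target-network weights
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, target.num_params),
)

z = torch.randn(64)                                    # latent code (e.g. from an auto-encoder)
weights = hypernet(z)                                  # a parametrization of one shape's surface
samples = torch.rand(2048, 3)                          # point-by-point samples from a prior
cloud = target(samples, weights)                       # (2048, 3) generated point cloud
```

In the paper's setting, the auto-encoder and hypernetwork would be trained jointly with a reconstruction objective over realistic 3D shapes; that training loop is omitted from this sketch.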
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.