PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling
- URL: http://arxiv.org/abs/2002.10277v2
- Date: Sat, 7 Mar 2020 16:02:05 GMT
- Title: PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling
- Authors: Yue Qian, Junhui Hou, Sam Kwong, Ying He
- Abstract summary: We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
- Score: 103.09504572409449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the problem of generating uniform dense point clouds to
describe the underlying geometric structures from given sparse point clouds.
Due to the irregular and unordered nature of point clouds, densification as a
generative task is challenging. To tackle the challenge, we propose a novel
deep neural network based method, called PUGeo-Net, that learns a $3\times 3$
linear transformation matrix $\mathbf{T}$ for each input point. Matrix $\mathbf{T}$
approximates the augmented Jacobian matrix of a local parameterization and
builds a one-to-one correspondence between the 2D parametric domain and the 3D
tangent plane so that we can lift the adaptively distributed 2D samples (which
are also learned from data) to 3D space. After that, we project the samples to
the curved surface by computing a displacement along the normal of the tangent
plane. PUGeo-Net is fundamentally different from the existing deep learning
methods that are largely motivated by the image super-resolution techniques and
generate new points in the abstract feature space. Thanks to its
geometry-centric nature, PUGeo-Net works well for both CAD models with sharp
features and scanned models with rich geometric details. Moreover, PUGeo-Net
can compute normals for both the original and generated points, which is highly
desirable for surface reconstruction algorithms. Computational results show
that PUGeo-Net, the first neural network that can jointly generate vertex
coordinates and normals, consistently outperforms the state of the art in terms
of accuracy and efficiency for upsampling factors from $4\times$ to $16\times$.
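The geometric step described above can be sketched numerically. In the sketch below, the function name `upsample_point` and its arguments are illustrative: the matrix $\mathbf{T}$, the 2D parametric samples, and the per-sample normal displacements would all be outputs of the network, not computed here. The sketch only shows how, given those quantities, 2D samples are lifted to the tangent plane and offset along the normal:

```python
import numpy as np

def upsample_point(x, T, uv_samples, displacements):
    """Lift learned 2D parametric samples to 3D around point x via a
    learned linear map T, then offset each sample along the estimated
    normal.

    Illustrative sketch of the geometric step in PUGeo-Net; here T,
    uv_samples, and displacements are assumed to be network outputs.
    """
    x = np.asarray(x, dtype=float)   # original 3D point
    T = np.asarray(T, dtype=float)   # learned 3x3 linear transformation
    # The tangent-plane normal is the image of the parametric "up"
    # direction (0, 0, 1) under T.
    n = T @ np.array([0.0, 0.0, 1.0])
    n /= np.linalg.norm(n)
    new_points = []
    for (u, v), d in zip(uv_samples, displacements):
        p_tangent = x + T @ np.array([u, v, 0.0])  # lift to tangent plane
        new_points.append(p_tangent + d * n)       # displace toward surface
    return np.stack(new_points)
```

With $\mathbf{T}$ set to the identity and $x$ at the origin, a sample $(u, v)$ with displacement $d$ maps to $(u, v, d)$, which makes the tangent-plane-plus-normal decomposition easy to verify.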
Related papers
- ParaPoint: Learning Global Free-Boundary Surface Parameterization of 3D Point Clouds [52.03819676074455]
ParaPoint is an unsupervised neural learning pipeline for achieving global free-boundary surface parameterization.
This work makes the first attempt to investigate neural point cloud parameterization that pursues both global mappings and free boundaries.
arXiv Detail & Related papers (2024-03-15T14:35:05Z)
- Oriented-grid Encoder for 3D Implicit Representations [10.02138130221506]
This paper is the first to exploit 3D characteristics in 3D geometric encoders explicitly.
Our method achieves state-of-the-art results compared to prior techniques.
arXiv Detail & Related papers (2024-02-09T19:28:13Z)
- Polyhedral Surface: Self-supervised Point Cloud Reconstruction Based on Polyhedral Surface [14.565612328814312]
We propose a novel polyhedral surface to represent local surfaces.
It does not require any local coordinate system, which is important when introducing neural networks.
Our method achieves state-of-the-art results on three commonly used networks.
arXiv Detail & Related papers (2023-10-23T04:24:31Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided Distance Representation [73.77505964222632]
We present a learning-based method, namely GeoUDF, to tackle the problem of reconstructing a discrete surface from a sparse point cloud.
To be specific, we propose a geometry-guided learning method for UDF and its gradient estimation.
To extract triangle meshes from the predicted UDF, we propose a customized edge-based marching cubes module.
arXiv Detail & Related papers (2022-11-30T06:02:01Z)
- SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth Sampling [75.957103837167]
Reconstructing a 3D shape based on a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape.
Existing works try to employ the global feature extracted from the sketch to directly predict the 3D coordinates, but they often lose fine details, producing results that are not faithful to the input sketch.
arXiv Detail & Related papers (2022-08-14T16:37:51Z)
- Laplacian2Mesh: Laplacian-Based Mesh Understanding [4.808061174740482]
We introduce a novel and flexible convolutional neural network (CNN) model, called Laplacian2Mesh, for 3D triangle meshes.
Mesh pooling is applied to expand the receptive field of the network via multi-space Laplacian transformations.
Experiments on various learning tasks applied to 3D meshes demonstrate the effectiveness and efficiency of Laplacian2Mesh.
arXiv Detail & Related papers (2022-02-01T10:10:13Z)
- Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud [50.56461318879761]
We propose Geometry-Disentangled Attention Network (GDANet) for 3D point cloud processing.
GDANet disentangles point clouds into contour and flat part of 3D objects, respectively denoted by sharp and gentle variation components.
Experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves state-of-the-art performance with fewer parameters.
arXiv Detail & Related papers (2020-12-20T13:35:00Z)