Learning Occupancy Function from Point Clouds for Surface Reconstruction
- URL: http://arxiv.org/abs/2010.11378v1
- Date: Thu, 22 Oct 2020 02:07:29 GMT
- Title: Learning Occupancy Function from Point Clouds for Surface Reconstruction
- Authors: Meng Jia and Matthew Kyan
- Abstract summary: Implicit function based surface reconstruction has been studied for a long time to recover 3D shapes from point clouds sampled from surfaces.
This paper proposes a novel method for learning occupancy functions from sparse point clouds and achieves better performance on challenging surface reconstruction tasks.
- Score: 6.85316573653194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit function based surface reconstruction has been studied for a long
time to recover 3D shapes from point clouds sampled from surfaces. Recently,
Signed Distance Functions (SDFs) and Occupancy Functions have been adopted in
learning-based shape reconstruction methods as implicit 3D shape
representation. This paper proposes a novel method for learning occupancy
functions from sparse point clouds and achieves better performance on
challenging surface reconstruction tasks. Unlike the previous methods, which
predict point occupancy with fully-connected multi-layer networks, we adapt the
point cloud deep learning architecture, Point Convolution Neural Network
(PCNN), to build our learning model. Specifically, we create a sampling
operator and insert it into PCNN to continuously sample the feature space at
the points where occupancy states need to be predicted. This method naturally
captures the geometric structure of point cloud data and is invariant to point
permutation. Our occupancy function learning fits easily into point cloud
up-sampling and surface reconstruction pipelines. Our experiments show
state-of-the-art reconstruction performance on the ShapeNet dataset and
demonstrate the method's generalization by testing it on the McGill 3D
dataset [siddiqi2008retrieving]. Moreover, we find the learned occupancy
function is more rotation invariant than those of previous shape learning
methods.
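To make the core idea concrete, the sketch below illustrates the two claims from the abstract: features are sampled from the point cloud at arbitrary query locations, and the result is invariant to point permutation. This is a minimal stand-in, not the authors' architecture; the paper's sampling operator lives inside PCNN's continuous convolutions, whereas here a simple inverse-distance-weighted k-nearest-neighbor interpolation and a placeholder linear occupancy head are assumed for illustration.

```python
import numpy as np

def sample_features(points, feats, queries, k=3, eps=1e-8):
    """Interpolate per-point features at arbitrary query locations using
    inverse-distance-weighted kNN (a simplified stand-in for the paper's
    sampling operator inside PCNN).
    points:  (N, 3) point cloud coordinates
    feats:   (N, C) per-point features
    queries: (Q, 3) locations where occupancy will be predicted
    returns: (Q, C) interpolated features
    """
    # Pairwise distances between queries and points: (Q, N)
    d = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]              # k nearest neighbors, (Q, k)
    nd = np.take_along_axis(d, idx, axis=1)         # their distances, (Q, k)
    w = 1.0 / (nd + eps)                            # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)               # normalize per query
    # Weighted sum of neighbor features -> (Q, C)
    return (feats[idx] * w[..., None]).sum(axis=1)

def occupancy_head(q_feats, W, b):
    """Placeholder linear classifier mapping sampled features to an
    occupancy probability in (0, 1) via a sigmoid."""
    logits = q_feats @ W + b
    return 1.0 / (1.0 + np.exp(-logits))
```

Because the interpolation reduces over neighbors with a symmetric weighted sum, shuffling the input points (and their features consistently) leaves the sampled features, and hence the predicted occupancy, unchanged.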
Related papers
- Learning Unsigned Distance Fields from Local Shape Functions for 3D Surface Reconstruction [42.840655419509346]
This paper presents a novel neural framework, LoSF-UDF, for reconstructing surfaces from 3D point clouds by leveraging local shape functions to learn UDFs.
We observe that 3D shapes manifest simple patterns within localized areas, prompting us to create a training dataset of point cloud patches.
Our approach learns features within a specific radius around each query point and utilizes an attention mechanism to focus on the crucial features for UDF estimation.
arXiv Detail & Related papers (2024-07-01T14:39:03Z) - Human as Points: Explicit Point-based 3D Human Reconstruction from
Single-view RGB Images [78.56114271538061]
We introduce an explicit point-based human reconstruction framework called HaP.
Our approach is featured by fully-explicit point cloud estimation, manipulation, generation, and refinement in the 3D geometric space.
Our results may indicate a paradigm rollback to the fully-explicit and geometry-centric algorithm design.
arXiv Detail & Related papers (2023-11-06T05:52:29Z) - GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided
Distance Representation [73.77505964222632]
We present a learning-based method, namely GeoUDF, to tackle the problem of reconstructing a discrete surface from a sparse point cloud.
To be specific, we propose a geometry-guided learning method for UDF and its gradient estimation.
To extract triangle meshes from the predicted UDF, we propose a customized edge-based marching cube module.
arXiv Detail & Related papers (2022-11-30T06:02:01Z) - CAP-UDF: Learning Unsigned Distance Functions Progressively from Raw Point Clouds with Consistency-Aware Field Optimization [54.69408516025872]
CAP-UDF is a novel method to learn consistency-aware UDF from raw point clouds.
We train a neural network to gradually infer the relationship between queries and the approximated surface.
We also introduce a polygonization algorithm to extract surfaces using the gradients of the learned UDF.
arXiv Detail & Related papers (2022-10-06T08:51:08Z) - Learning Modified Indicator Functions for Surface Reconstruction [10.413340575612233]
We propose a learning-based approach for implicit surface reconstruction from raw point clouds without normals.
Our method is inspired by Gauss Lemma in potential energy theory, which gives an explicit integral formula for the indicator functions.
We design a novel deep neural network to perform surface integral and learn the modified indicator functions from un-oriented and noisy point clouds.
arXiv Detail & Related papers (2021-11-18T05:30:35Z) - Shape As Points: A Differentiable Poisson Solver [118.12466580918172]
In this paper, we introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR).
The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field.
Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude.
arXiv Detail & Related papers (2021-06-07T09:28:38Z) - Deep Implicit Moving Least-Squares Functions for 3D Reconstruction [23.8586965588835]
In this work, we turn the discrete point sets into smooth surfaces by introducing the well-known implicit moving least-squares (IMLS) surface formulation.
We incorporate IMLS surface generation into deep neural networks for inheriting both the flexibility of point sets and the high quality of implicit surfaces.
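For reference, the classical IMLS formulation that IMLSNet builds on defines, for a query point $\mathbf{x}$, a weighted average of signed distances to the tangent planes at nearby points $\mathbf{p}_i$ with normals $\mathbf{n}_i$ (standard notation assumed here; $\sigma$ is a bandwidth parameter):

$$ f(\mathbf{x}) \;=\; \frac{\sum_i \theta\!\left(\lVert \mathbf{x}-\mathbf{p}_i \rVert\right)\, \mathbf{n}_i^{\top}(\mathbf{x}-\mathbf{p}_i)}{\sum_i \theta\!\left(\lVert \mathbf{x}-\mathbf{p}_i \rVert\right)}, \qquad \theta(r) = \exp\!\left(-r^2/\sigma^2\right) $$

The zero level set $f(\mathbf{x}) = 0$ gives the reconstructed surface, which is smooth wherever the Gaussian weights are.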
Our experiments on 3D object reconstruction demonstrate that IMLSNets outperform state-of-the-art learning-based methods in terms of reconstruction quality and computational efficiency.
arXiv Detail & Related papers (2021-03-23T02:26:07Z) - RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction [19.535169371240073]
We introduce RfD-Net that jointly detects and reconstructs dense object surfaces directly from point clouds.
We decouple the instance reconstruction into global object localization and local shape prediction.
Our approach consistently outperforms the state of the art and improves mesh IoU in object reconstruction by over 11 points.
arXiv Detail & Related papers (2020-11-30T12:58:05Z) - Neural-Pull: Learning Signed Distance Functions from Point Clouds by
Learning to Pull Space onto Surfaces [68.12457459590921]
Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing.
We introduce Neural-Pull, a new approach that is simple and leads to high quality SDFs.
arXiv Detail & Related papers (2020-11-26T23:18:10Z) - Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.