Shape As Points: A Differentiable Poisson Solver
- URL: http://arxiv.org/abs/2106.03452v1
- Date: Mon, 7 Jun 2021 09:28:38 GMT
- Title: Shape As Points: A Differentiable Poisson Solver
- Authors: Songyou Peng, Chiyu "Max" Jiang, Yiyi Liao, Michael Niemeyer, Marc
Pollefeys, Andreas Geiger
- Abstract summary: In this paper, we introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR).
The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field.
Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude.
- Score: 118.12466580918172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, neural implicit representations gained popularity in 3D
reconstruction due to their expressiveness and flexibility. However, the
implicit nature of neural implicit representations results in slow inference
time and requires careful initialization. In this paper, we revisit the classic
yet ubiquitous point cloud representation and introduce a differentiable
point-to-mesh layer using a differentiable formulation of Poisson Surface
Reconstruction (PSR) that allows for a GPU-accelerated fast solution of the
indicator function given an oriented point cloud. The differentiable PSR layer
allows us to efficiently and differentiably bridge the explicit 3D point
representation with the 3D mesh via the implicit indicator field, enabling
end-to-end optimization of surface reconstruction metrics such as Chamfer
distance. This duality between points and meshes hence allows us to represent
shapes as oriented point clouds, which are explicit, lightweight and
expressive. Compared to neural implicit representations, our Shape-As-Points
(SAP) model is more interpretable, lightweight, and accelerates inference time
by one order of magnitude. Compared to other explicit representations such as
points, patches, and meshes, SAP produces topology-agnostic, watertight
manifold surfaces. We demonstrate the effectiveness of SAP on the task of
surface reconstruction from unoriented point clouds and learning-based
reconstruction.
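To make the core idea concrete, the sketch below shows one way a Poisson solve for an indicator field can be written so that it is both GPU-friendly and differentiable: rasterize the oriented points into a vector field V on a periodic grid and solve the Poisson equation ∇²χ = ∇·V in the Fourier domain. This is a minimal illustration of the kind of operation the abstract describes, not the authors' implementation; the grid handling, function names, and use of PyTorch FFTs are assumptions.

```python
import math
import torch

def spectral_poisson_solve(V: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: solve laplacian(chi) = div(V) on a periodic N^3 grid.

    V: (3, N, N, N) vector field rasterized from an oriented point cloud,
       with components ordered the same way as the grid axes (assumed).
    Returns chi: (N, N, N) indicator-like scalar field. Every step is a
    differentiable tensor op, so gradients flow back to whatever produced V.
    """
    N = V.shape[-1]
    # Integer wavenumbers on the unit cube, scaled to angular frequencies.
    freqs = 2.0 * math.pi * torch.fft.fftfreq(N, d=1.0 / N)
    k = torch.stack(torch.meshgrid(freqs, freqs, freqs, indexing="ij"))  # (3, N, N, N)

    V_hat = torch.fft.fftn(V, dim=(1, 2, 3))
    div_hat = (1j * k * V_hat).sum(dim=0)              # Fourier transform of div(V)

    k_sq = (k ** 2).sum(dim=0)
    k_sq[0, 0, 0] = 1.0                                # avoid 0/0 at the DC term
    chi_hat = -div_hat / k_sq                          # -|k|^2 * chi_hat = div_hat
    chi_hat[0, 0, 0] = 0.0                             # the constant offset is free

    return torch.fft.ifftn(chi_hat, dim=(1, 2, 3)).real
```

Because the solve consists only of differentiable operations, a loss such as the Chamfer distance between a mesh extracted from χ and a target point cloud can, in principle, be backpropagated all the way to the input point positions and normals.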
Related papers
- 3D Reconstruction with Fast Dipole Sums [12.865206085308728]
We introduce a method for high-quality 3D reconstruction from multiview images.
We represent implicit geometry and radiance fields as per-point attributes of a dense point cloud.
Queries of this representation facilitate the use of ray tracing to efficiently and differentiably render images.
arXiv Detail & Related papers (2024-05-27T03:23:25Z)
- LISR: Learning Linear 3D Implicit Surface Representation Using Compactly Supported Radial Basis Functions [5.056545768004376]
Implicit 3D surface reconstruction of an object from its partial and noisy 3D point cloud scan is a classical geometry processing and 3D computer vision problem.
We propose a neural network architecture for learning the linear implicit shape representation of the 3D surface of an object.
The proposed approach achieves a better Chamfer distance than the state-of-the-art approach and a comparable F-score on the benchmark dataset.
arXiv Detail & Related papers (2024-02-11T20:42:49Z)
- Learning Smooth Neural Functions via Lipschitz Regularization [92.42667575719048]
We introduce a novel regularization designed to encourage smooth latent spaces in neural fields.
Compared with prior Lipschitz regularized networks, ours is computationally fast and can be implemented in four lines of code.
arXiv Detail & Related papers (2022-02-16T21:24:54Z)
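The entry above notes that its smoothness regularizer can be implemented in only a few lines. As a hedged, generic illustration (not necessarily that paper's exact formulation), one can penalize an upper bound on a ReLU MLP's Lipschitz constant built from per-layer weight norms; the function name and the choice of the infinity norm are assumptions.

```python
import torch
import torch.nn as nn

def lipschitz_penalty(mlp: nn.Sequential) -> torch.Tensor:
    """Hypothetical sketch: the product of per-layer operator norms upper-bounds
    the Lipschitz constant of a ReLU MLP, so penalizing it encourages the
    network to represent a smoother function of its inputs."""
    bound = torch.ones(())
    for layer in mlp:
        if isinstance(layer, nn.Linear):
            # infinity-norm of the weight matrix: max absolute row sum
            bound = bound * layer.weight.abs().sum(dim=1).max()
    return bound

# e.g.: loss = task_loss + 1e-6 * lipschitz_penalty(model)   # weight is a guess
```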
- Learning Modified Indicator Functions for Surface Reconstruction [10.413340575612233]
We propose a learning-based approach for implicit surface reconstruction from raw point clouds without normals.
Our method is inspired by the Gauss Lemma in potential theory, which gives an explicit integral formula for the indicator function.
We design a novel deep neural network to perform surface integral and learn the modified indicator functions from un-oriented and noisy point clouds.
arXiv Detail & Related papers (2021-11-18T05:30:35Z)
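For reference, the classical identity the entry above alludes to expresses the indicator of a solid region Ω with boundary ∂Ω and outward unit normal n as a boundary integral; this is the standard statement of the Gauss lemma (solid-angle formula), written here from memory rather than quoted from that paper:

```latex
\chi_{\Omega}(q) \;=\; \frac{1}{4\pi} \int_{\partial\Omega}
\frac{(p - q) \cdot n(p)}{\lVert p - q \rVert^{3}} \, \mathrm{d}A(p)
\;=\;
\begin{cases}
1 & q \ \text{inside } \Omega,\\[2pt]
0 & q \ \text{outside } \Omega.
\end{cases}
```

Replacing the exact integrand with a learned, modified one is how the entry above describes handling un-oriented and noisy inputs without normal estimation.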
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces [68.12457459590921]
Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing.
We introduce Neural-Pull, a new approach that is simple and leads to high-quality SDFs.
arXiv Detail & Related papers (2020-11-26T23:18:10Z)
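The Neural-Pull entry above describes learning an SDF by pulling query points onto the surface. A hedged sketch of that idea (the function names and the nearest-neighbor pairing are assumptions, not that paper's code): move each query along the negative SDF gradient by the predicted distance, and require the moved point to coincide with its nearest input point.

```python
import torch

def pull_loss(sdf_net, queries: torch.Tensor, nearest_pts: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of the 'pulling' objective.

    queries:     (B, 3) points sampled around the input cloud.
    nearest_pts: (B, 3) nearest input point for each query.
    """
    queries = queries.detach().requires_grad_(True)
    d = sdf_net(queries)                                         # (B, 1) predicted SDF
    grad = torch.autograd.grad(d.sum(), queries, create_graph=True)[0]
    direction = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)  # unit gradient direction
    pulled = queries - d * direction                             # project onto the surface
    return ((pulled - nearest_pts) ** 2).sum(dim=-1).mean()
```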
- Learning Occupancy Function from Point Clouds for Surface Reconstruction [6.85316573653194]
Implicit function based surface reconstruction has been studied for a long time to recover 3D shapes from point clouds sampled from surfaces.
This paper proposes a novel method for learning occupancy functions from sparse point clouds and achieves better performance on challenging surface reconstruction tasks.
arXiv Detail & Related papers (2020-10-22T02:07:29Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
- Convolutional Occupancy Networks [88.48287716452002]
We propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes.
By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space.
We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
arXiv Detail & Related papers (2020-03-10T10:17:07Z)
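The Convolutional Occupancy Networks entry above describes pairing a convolutional feature encoder with an implicit occupancy decoder. The sketch below illustrates one common way such a query can look, interpolating a point-local feature from a 3D feature volume and decoding it with a small MLP; the tensor layout and function names are assumptions rather than that paper's API.

```python
import torch
import torch.nn.functional as F

def query_occupancy(feature_volume: torch.Tensor,
                    decoder: torch.nn.Module,
                    points: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: trilinearly interpolate a local feature for each
    query point from a convolutional feature volume, then decode occupancy.

    feature_volume: (1, C, D, H, W) output of a 3D convolutional encoder.
    points:         (N, 3) query coordinates normalized to [-1, 1]^3.
    Returns:        (N, 1) occupancy probabilities.
    """
    grid = points.view(1, 1, 1, -1, 3)                       # grid_sample layout
    feats = F.grid_sample(feature_volume, grid, align_corners=True)
    feats = feats.view(feature_volume.shape[1], -1).t()      # (N, C) per-point features
    return torch.sigmoid(decoder(torch.cat([points, feats], dim=-1)))
```

Conditioning the implicit decoder on spatially local features, rather than a single global latent code, is the inductive bias the entry refers to.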
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.