Coupling Explicit and Implicit Surface Representations for Generative 3D
Modeling
- URL: http://arxiv.org/abs/2007.10294v2
- Date: Sat, 17 Oct 2020 02:10:58 GMT
- Title: Coupling Explicit and Implicit Surface Representations for Generative 3D
Modeling
- Authors: Omid Poursaeed and Matthew Fisher and Noam Aigerman and Vladimir G.
Kim
- Abstract summary: We propose a novel neural architecture for representing 3D surfaces, which harnesses two complementary shape representations.
We make these two representations synergistic by introducing novel consistency losses.
Our hybrid architecture produces results superior to those of the two equivalent single-representation networks.
- Score: 41.79675639550555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel neural architecture for representing 3D surfaces, which
harnesses two complementary shape representations: (i) an explicit
representation via an atlas, i.e., embeddings of 2D domains into 3D; (ii) an
implicit-function representation, i.e., a scalar function over the 3D volume,
with its levels denoting surfaces. We make these two representations
synergistic by introducing novel consistency losses that ensure that the
surface created from the atlas aligns with the level-set of the implicit
function. Our hybrid architecture outputs results which are superior to the
output of the two equivalent single-representation networks, yielding smoother
explicit surfaces with more accurate normals, and a more accurate implicit
occupancy function. Additionally, our surface reconstruction step can directly
leverage the explicit atlas-based representation. This process is
computationally efficient, and can be directly used by differentiable
rasterizers, enabling training our hybrid representation with image-based
losses.
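The coupling can be pictured with a short sketch. Below is a minimal, hypothetical PyTorch version of one such consistency loss; `atlas_net` (embedding 2D UV samples into 3D) and `implicit_net` (predicting occupancy in [0, 1]) are assumed placeholder modules, and the paper's exact losses and architectures may differ:

```python
import torch

def atlas_implicit_consistency(atlas_net, implicit_net, n_samples=1024, level=0.5):
    """Penalize implicit-function values at points generated by the atlas.

    Points on the atlas-generated surface should lie on the chosen level
    set of the implicit occupancy function (here the 0.5 iso-level).
    """
    # Sample the 2D parametric domain uniformly.
    uv = torch.rand(n_samples, 2)
    # Explicit branch: embed 2D samples as 3D surface points.
    points = atlas_net(uv)                  # (n_samples, 3)
    # Implicit branch: evaluate occupancy at those surface points.
    occ = implicit_net(points).squeeze(-1)  # (n_samples,)
    # Surface points should sit on the iso-level of the implicit field.
    return ((occ - level) ** 2).mean()
```

In training, a term like this would sit alongside each branch's own reconstruction loss, pulling the atlas surface and the implicit level set toward one another.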
Related papers
- LISR: Learning Linear 3D Implicit Surface Representation Using Compactly
Supported Radial Basis Functions [5.056545768004376]
Implicit 3D surface reconstruction of an object from its partial and noisy 3D point cloud scan is a classical problem in geometry processing and 3D computer vision.
We propose a neural network architecture for learning the linear implicit shape representation of the 3D surface of an object.
The proposed approach achieves a better Chamfer distance than, and an F-score comparable to, the state-of-the-art approach on the benchmark dataset.
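As a rough illustration of the representation family (not LISR's learned pipeline), an implicit surface can be written as a linear combination of compactly supported radial basis functions; the Wendland C2 kernel below is one common choice, and all names are illustrative:

```python
import torch

def wendland_c2(r):
    """Wendland C2 kernel: compactly supported, zero for r >= 1."""
    return torch.clamp(1.0 - r, min=0.0) ** 4 * (4.0 * r + 1.0)

def rbf_implicit(queries, centers, weights, support=0.1):
    """Evaluate f(x) = sum_i w_i * phi(||x - c_i|| / support).

    The zero level set of f approximates the surface; compact support
    makes each query depend on only a few nearby centers, keeping the
    underlying linear system sparse.
    """
    # Pairwise query-to-center distances, scaled to the kernel support.
    r = torch.cdist(queries, centers) / support  # (n_queries, n_centers)
    return wendland_c2(r) @ weights              # (n_queries,)
```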
arXiv Detail & Related papers (2024-02-11T20:42:49Z)
- Sur2f: A Hybrid Representation for High-Quality and Efficient Surface Reconstruction from Multi-view Images [41.81291587750352]
Multi-view surface reconstruction is an ill-posed inverse problem in 3D vision research.
Most existing methods rely either on explicit meshes or on implicit field functions, using volume rendering of the fields for reconstruction.
We propose a new hybrid representation, termed Sur2f, aiming to better benefit from both representations in a complementary manner.
arXiv Detail & Related papers (2024-01-08T07:22:59Z)
- Neural Vector Fields: Implicit Representation by Explicit Learning [63.337294707047036]
We propose a novel 3D representation method, Neural Vector Fields (NVF).
It adopts not only an explicit learning process that manipulates meshes directly, but also an implicit representation based on unsigned distance functions (UDFs).
Our method first predicts displacements from query points towards the surface, modeling shapes as vector fields.
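A hedged sketch of the displacement idea: given an unsigned distance field, queries can be projected onto the surface along the unit-normalized negative gradient. NVF learns such displacement vectors directly; here they are derived by autograd from a hypothetical `udf_net` for illustration:

```python
import torch

def project_to_surface(udf_net, queries):
    """Move query points onto the surface along the negative UDF gradient.

    For an unsigned distance field d, the displacement -d(q) * grad d(q),
    with the gradient normalized to unit length, lands q approximately on
    its nearest surface point.
    """
    q = queries.clone().requires_grad_(True)
    d = udf_net(q).squeeze(-1)                   # (n,)
    # Gradient of the distance field at each query.
    (grad,) = torch.autograd.grad(d.sum(), q)
    direction = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    return (q - d.unsqueeze(-1) * direction).detach()
```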
arXiv Detail & Related papers (2023-03-08T02:36:09Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
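The differentiable core of marching tetrahedra is how a mesh vertex is placed on a sign-changing tetrahedron edge; a minimal sketch (DMTet's full pipeline adds learned SDF values, vertex deformations, and subdivision):

```python
import torch

def edge_zero_crossing(v_a, v_b, s_a, s_b):
    """Place a mesh vertex where the SDF changes sign along a tet edge.

    Linear interpolation between the edge endpoints, weighted by the
    signed distances s_a and s_b; since the weight is differentiable in
    the SDF values, mesh-based losses can backpropagate into the field.
    """
    t = s_a / (s_a - s_b)                 # in (0, 1) when signs differ
    return v_a + t.unsqueeze(-1) * (v_b - v_a)
```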
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Differentiable Surface Rendering via Non-Differentiable Sampling [19.606523934811577]
We present a method for differentiable rendering of 3D surfaces that supports both explicit and implicit representations.
We show, for the first time, efficient differentiable rendering of an isosurface extracted from a neural radiance field (NeRF), and demonstrate surface-based, rather than volume-based, rendering of a NeRF.
arXiv Detail & Related papers (2021-08-10T19:25:06Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high-fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
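A small sketch of what such a representation buys: a hypothetical `csp_net` maps each query to its nearest surface point, from which unsigned distance and direction-to-surface follow directly:

```python
import torch

def csp_distance_and_direction(csp_net, queries):
    """Recover unsigned distance and direction-to-surface from a CSP network.

    A closest-surface-point network maps each 3D query to its nearest
    point on the surface; the difference vector then gives both the
    unsigned distance and the direction toward the surface.
    """
    closest = csp_net(queries)            # (n, 3)
    diff = closest - queries
    dist = diff.norm(dim=-1, keepdim=True)
    direction = diff / (dist + 1e-8)
    return dist, direction
```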
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- Shape As Points: A Differentiable Poisson Solver [118.12466580918172]
In this paper, we introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR).
The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field.
Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude.
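At the heart of differentiable PSR is a spectral Poisson solve. Below is a minimal sketch under simplifying assumptions (periodic domain, point normals already splatted onto a regular grid as `v`); the actual SAP layer also includes the differentiable point-to-grid rasterization:

```python
import torch

def poisson_indicator(v):
    """Spectral solve of the PSR system  laplacian(chi) = div(v).

    `v` is a (3, N, N, N) grid of smoothed point normals; the returned
    scalar grid approximates the indicator function whose iso-level is
    the surface. An FFT solve implies periodic boundary conditions.
    """
    n = v.shape[-1]
    k = 2.0 * torch.pi * torch.fft.fftfreq(n)
    kx, ky, kz = torch.meshgrid(k, k, k, indexing="ij")
    v_hat = torch.fft.fftn(v, dim=(-3, -2, -1))
    # Divergence in the Fourier domain: i * (k . v_hat).
    div_hat = 1j * (kx * v_hat[0] + ky * v_hat[1] + kz * v_hat[2])
    k_sq = kx ** 2 + ky ** 2 + kz ** 2
    k_sq[0, 0, 0] = 1.0                   # avoid division by zero at k = 0
    chi_hat = -div_hat / k_sq             # chi_hat = -i (k . v_hat) / |k|^2
    chi_hat[0, 0, 0] = 0.0                # fix the free additive constant
    return torch.fft.ifftn(chi_hat).real
```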
arXiv Detail & Related papers (2021-06-07T09:28:38Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
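The renderer such representations accelerate is sphere tracing, in which each ray step advances by the queried SDF value; a minimal sketch with a generic `sdf` callable:

```python
import torch

def sphere_trace(sdf, origins, directions, n_steps=64, eps=1e-3):
    """March rays through an SDF; each step advances by the queried distance.

    `origins` and `directions` are (n, 3) tensors; `sdf` maps (n, 3)
    points to signed distances. Returns the final points and a hit mask.
    """
    t = torch.zeros(origins.shape[0])
    hit = torch.zeros(origins.shape[0], dtype=torch.bool)
    for _ in range(n_steps):
        p = origins + t.unsqueeze(-1) * directions
        d = sdf(p).squeeze(-1)
        hit = hit | (d.abs() < eps)
        t = torch.where(hit, t, t + d)    # freeze rays that have converged
    return origins + t.unsqueeze(-1) * directions, hit
```

Since an SDF lower-bounds the distance to the nearest surface, the step can never overshoot; NGLOD's contribution is making each `sdf(p)` query cheap via a sparse feature octree.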
arXiv Detail & Related papers (2021-01-26T18:50:22Z)