Representing 3D Shapes with Probabilistic Directed Distance Fields
- URL: http://arxiv.org/abs/2112.05300v1
- Date: Fri, 10 Dec 2021 02:15:47 GMT
- Title: Representing 3D Shapes with Probabilistic Directed Distance Fields
- Authors: Tristan Aumentado-Armstrong, Stavros Tsogkas, Sven Dickinson, Allan Jepson
- Abstract summary: We develop a novel shape representation that allows fast differentiable rendering within an implicit architecture.
We show how to model inherent discontinuities in the underlying field.
We also apply our method to fitting single shapes, unpaired 3D-aware generative image modelling, and single-image 3D reconstruction tasks.
- Score: 7.528141488548544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentiable rendering is an essential operation in modern vision, allowing
inverse graphics approaches to 3D understanding to be utilized in modern
machine learning frameworks. Explicit shape representations (voxels, point
clouds, or meshes), while relatively easily rendered, often suffer from limited
geometric fidelity or topological constraints. On the other hand, implicit
representations (occupancy, distance, or radiance fields) preserve greater
fidelity, but suffer from complex or inefficient rendering processes, limiting
scalability. In this work, we endeavour to address both shortcomings with a
novel shape representation that allows fast differentiable rendering within an
implicit architecture. Building on implicit distance representations, we define
Directed Distance Fields (DDFs), which map an oriented point (position and
direction) to surface visibility and depth. Such a field can render a depth map
with a single forward pass per pixel, enable differential surface geometry
extraction (e.g., surface normals and curvatures) via network derivatives, be
easily composed, and permit extraction of classical unsigned distance fields.
Using probabilistic DDFs (PDDFs), we show how to model inherent discontinuities
in the underlying field. Finally, we apply our method to fitting single shapes,
unpaired 3D-aware generative image modelling, and single-image 3D
reconstruction tasks, showcasing strong performance with simple architectural
components via the versatility of our representation.
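To make the mapping concrete, below is a minimal sketch of a DDF as described in the abstract: an MLP maps an oriented point (position plus view direction) to a visibility probability and a depth, so a depth map needs only one forward pass per pixel, and geometric quantities can be probed through network derivatives. The network width, activations, and toy camera setup are illustrative assumptions, not the paper's architecture.

```python
# Minimal DDF sketch (assumed PyTorch MLP; sizes and scene setup are illustrative).
import torch
import torch.nn as nn

class DirectedDistanceField(nn.Module):
    """Maps an oriented point (3D position + unit direction) to a
    visibility probability and a depth along that direction."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 2),  # (visibility logit, depth)
        )

    def forward(self, p: torch.Tensor, d: torch.Tensor):
        out = self.net(torch.cat([p, d], dim=-1))
        visibility = torch.sigmoid(out[..., :1])  # probability a surface is hit
        depth = torch.relu(out[..., 1:])          # non-negative distance to it
        return visibility, depth

# Rendering a depth map: one forward pass per pixel (one ray each).
ddf = DirectedDistanceField()
H = W = 64
origins = torch.zeros(H * W, 3)                   # toy camera at the origin
dirs = torch.randn(H * W, 3)
dirs = dirs / dirs.norm(dim=-1, keepdim=True)     # per-pixel unit view directions
vis, depth = ddf(origins, dirs)
depth_map = (vis * depth).reshape(H, W)           # mask depth by visibility

# Derivative-based geometry extraction: gradient of depth w.r.t. position.
p = origins.clone().requires_grad_(True)
_, t = ddf(p, dirs)
grad_t = torch.autograd.grad(t.sum(), p, create_graph=True)[0]
```

The gradient computed at the end only illustrates the idea of extracting surface geometry via network derivatives; the paper derives its own expressions for normals and curvatures from the depth field.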
Related papers
- Probabilistic Directed Distance Fields for Ray-Based Shape Representations [8.134429779950658]
Directed Distance Fields (DDFs) are a novel neural shape representation that builds upon classical distance fields.
We show how to model inherent discontinuities in the underlying field.
We then apply DDFs to several applications, including single-shape fitting, generative modelling, and single-image 3D reconstruction.
arXiv Detail & Related papers (2024-04-13T21:02:49Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- DeepMesh: Differentiable Iso-Surface Extraction [53.77622255726208]
We introduce a differentiable way to produce explicit surface mesh representations from Deep Implicit Fields.
Our key insight is that by reasoning on how implicit field perturbations impact local surface geometry, one can ultimately differentiate the 3D location of surface samples.
We exploit this to define DeepMesh, an end-to-end differentiable mesh representation that can vary its topology.
arXiv Detail & Related papers (2021-06-20T20:12:41Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- MeshSDF: Differentiable Iso-Surface Extraction [45.769838982991736]
We introduce a differentiable way to produce explicit surface mesh representations from Deep Signed Distance Functions.
Our key insight is that by reasoning on how implicit field perturbations impact local surface geometry, one can ultimately differentiate the 3D location of surface samples.
We exploit this to define MeshSDF, an end-to-end differentiable mesh representation which can vary its topology.
arXiv Detail & Related papers (2020-06-06T23:44:05Z)