Differentiable Surface Rendering via Non-Differentiable Sampling
- URL: http://arxiv.org/abs/2108.04886v1
- Date: Tue, 10 Aug 2021 19:25:06 GMT
- Title: Differentiable Surface Rendering via Non-Differentiable Sampling
- Authors: Forrester Cole, Kyle Genova, Avneesh Sud, Daniel Vlasic, Zhoutong Zhang
- Abstract summary: We present a method for differentiable rendering of 3D surfaces that supports both explicit and implicit representations.
We show for the first time efficient, differentiable rendering of an isosurface extracted from a neural radiance field (NeRF), and demonstrate surface-based, rather than volume-based, rendering of a NeRF.
- Score: 19.606523934811577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method for differentiable rendering of 3D surfaces that supports
both explicit and implicit representations, provides derivatives at occlusion
boundaries, and is fast and simple to implement. The method first samples the
surface using non-differentiable rasterization, then applies differentiable,
depth-aware point splatting to produce the final image. Our approach requires
no differentiable meshing or rasterization steps, making it efficient for large
3D models and applicable to isosurfaces extracted from implicit surface
definitions. We demonstrate the effectiveness of our method for implicit-,
mesh-, and parametric-surface-based inverse rendering and neural-network
training applications. In particular, we show for the first time efficient,
differentiable rendering of an isosurface extracted from a neural radiance
field (NeRF), and demonstrate surface-based, rather than volume-based,
rendering of a NeRF.
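The two-stage pipeline described in the abstract is concrete enough to sketch. The PyTorch snippet below is a minimal illustration, not the authors' implementation: a non-differentiable stage stands in for the rasterizer that supplies surface samples, and a differentiable, depth-aware splatting stage turns those samples into an image so that gradients flow back to the point positions, including across splat falloff at occlusion boundaries. The Gaussian splat kernel and the `sigma` and `beta` parameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def splat(points, colors, H, W, sigma=1.0, beta=10.0):
    """Differentiable, depth-aware point splatting (illustrative kernel).

    points: (N, 3) tensor; x, y in pixel coordinates, z is depth in [0, 1].
    colors: (N, C) per-point colors.
    Returns an (H, W, C) image through which gradients reach `points`.
    """
    ys = torch.arange(H, dtype=points.dtype)
    xs = torch.arange(W, dtype=points.dtype)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")       # each (H, W)
    dx = gx[None] - points[:, 0, None, None]             # (N, H, W)
    dy = gy[None] - points[:, 1, None, None]
    spatial = torch.exp(-(dx ** 2 + dy ** 2) / (2 * sigma ** 2))
    depth = torch.exp(-beta * points[:, 2, None, None])  # nearer -> heavier
    w = spatial * depth
    w = w / (w.sum(dim=0, keepdim=True) + 1e-8)          # normalize per pixel
    return torch.einsum("nhw,nc->hwc", w, colors)

# Stage 1 (non-differentiable stand-in): pretend a rasterizer handed us
# surface samples; only the splatting stage below needs to be differentiable.
with torch.no_grad():
    pts = torch.rand(256, 3) * torch.tensor([64.0, 64.0, 1.0])
pts.requires_grad_(True)   # optimize positions via the splatting gradients
cols = torch.rand(256, 3)

img = splat(pts, cols, 64, 64)   # Stage 2: differentiable splatting
img.sum().backward()             # pts.grad is now populated
```

Because the sampling stage carries no gradients, the splatting stage alone must account for visibility, which is why the depth weighting (a soft nearest-surface selection here) matters.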
Related papers
- A Simple Approach to Differentiable Rendering of SDFs [21.97043707520229]
We present a simple algorithm for differentiable rendering of surfaces represented by Signed Distance Fields (SDFs).
Our method expands the lower-dimensional boundary integral into a thin band that is easy to sample when the underlying surface is represented by an SDF (a toy band-sampling sketch appears after this list).
arXiv Detail & Related papers (2024-05-14T16:19:13Z)
- NeuSD: Surface Completion with Multi-View Text-to-Image Diffusion [56.98287481620215]
We present a novel method for 3D surface reconstruction from multiple images where only a part of the object of interest is captured.
Our approach builds on two recent developments: surface reconstruction using neural radiance fields for the reconstruction of the visible parts of the surface, and guidance of pre-trained 2D diffusion models in the form of Score Distillation Sampling (SDS) to complete the shape in unobserved regions in a plausible manner.
arXiv Detail & Related papers (2023-12-07T19:30:55Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z)
- Representing 3D Shapes with Probabilistic Directed Distance Fields [7.528141488548544]
We develop a novel shape representation that allows fast differentiable rendering within an implicit architecture.
We show how to model inherent discontinuities in the underlying field.
We also apply our method to fitting single shapes, unpaired 3D-aware generative image modelling, and single-image 3D reconstruction tasks.
arXiv Detail & Related papers (2021-12-10T02:15:47Z)
- DeepMesh: Differentiable Iso-Surface Extraction [53.77622255726208]
We introduce a differentiable way to produce explicit surface mesh representations from Deep Implicit Fields.
Our key insight is that by reasoning on how implicit field perturbations impact local surface geometry, one can ultimately differentiate the 3D location of surface samples.
We exploit this to define DeepMesh -- end-to-end differentiable mesh representation that can vary its topology.
arXiv Detail & Related papers (2021-06-20T20:12:41Z)
- Shape As Points: A Differentiable Poisson Solver [118.12466580918172]
In this paper, we introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR).
The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field.
Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude.
arXiv Detail & Related papers (2021-06-07T09:28:38Z)
- Coupling Explicit and Implicit Surface Representations for Generative 3D Modeling [41.79675639550555]
We propose a novel neural architecture for representing 3D surfaces, which harnesses two complementary shape representations.
We make these two representations synergistic by introducing novel consistency losses.
Our hybrid architecture produces results superior to those of the two equivalent single-representation networks.
arXiv Detail & Related papers (2020-07-20T17:24:51Z)
- MeshSDF: Differentiable Iso-Surface Extraction [45.769838982991736]
We introduce a differentiable way to produce explicit surface mesh representations from Deep Signed Distance Functions.
Our key insight is that by reasoning on how implicit field perturbations impact local surface geometry, one can ultimately differentiate the 3D location of surface samples.
We exploit this to define MeshSDF, an end-to-end differentiable mesh representation which can vary its topology.
arXiv Detail & Related papers (2020-06-06T23:44:05Z)
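As referenced in the first related paper above, sampling a thin band around an SDF's zero level set is straightforward because the field value itself bounds the distance to the surface. The sketch below is a toy rejection-sampling illustration of that general idea, not the paper's algorithm; `sdf_sphere`, `eps`, and `bound` are hypothetical stand-ins.

```python
import torch

def sdf_sphere(x, radius=0.5):
    # Example SDF: a sphere of the given radius centered at the origin.
    return x.norm(dim=-1) - radius

def sample_thin_band(sdf, n, eps=0.02, bound=1.0):
    """Rejection-sample n points with |sdf(x)| < eps, i.e. within a thin
    band of half-width eps around the zero level set."""
    accepted = []
    count = 0
    while count < n:
        x = (torch.rand(8 * n, 3) * 2 - 1) * bound   # uniform in the cube
        x = x[sdf(x).abs() < eps]                    # keep near-surface points
        accepted.append(x)
        count += x.shape[0]
    return torch.cat(accepted)[:n]

band = sample_thin_band(sdf_sphere, 1024)   # (1024, 3) near-surface samples
```

In practice one would use a smarter sampler than uniform rejection, but the SDF property that makes the band "easy to sample" (a cheap membership test) is the same.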
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.