Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes
- URL: http://arxiv.org/abs/2101.10994v1
- Date: Tue, 26 Jan 2021 18:50:22 GMT
- Title: Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes
- Authors: Towaki Takikawa, Joey Litalien, Kangxue Yin, Karsten Kreis, Charles Loop, Derek Nowrouzezahrai, Alec Jacobson, Morgan McGuire, Sanja Fidler
- Abstract summary: We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
- Score: 77.6741486264257
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural signed distance functions (SDFs) are emerging as an effective
representation for 3D shapes. State-of-the-art methods typically encode the SDF
with a large, fixed-size neural network to approximate complex shapes with
implicit surfaces. Rendering with these large networks is, however,
computationally expensive since it requires many forward passes through the
network for every pixel, making these representations impractical for real-time
graphics. We introduce an efficient neural representation that, for the first
time, enables real-time rendering of high-fidelity neural SDFs, while achieving
state-of-the-art geometry reconstruction quality. We represent implicit
surfaces using an octree-based feature volume which adaptively fits shapes with
multiple discrete levels of detail (LODs), and enables continuous LOD with SDF
interpolation. We further develop an efficient algorithm to directly render our
novel neural SDF representation in real-time by querying only the necessary
LODs with sparse octree traversal. We show that our representation is 2-3
orders of magnitude more efficient in terms of rendering speed compared to
previous works. Furthermore, it produces state-of-the-art reconstruction
quality for complex shapes under both 3D geometric and 2D image-space metrics.
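The core idea of the abstract can be illustrated with a short, self-contained sketch. This is not the authors' implementation: it replaces the sparse octree with small dense feature grids, uses untrained random weights, and invents names such as `LODS`, `FEAT_DIM`, `sdf`, and `sphere_trace` purely for illustration. It only shows the data flow: trilinearly interpolate per-LOD corner features at a query point, sum them, decode a signed distance with a small MLP, and sphere-trace a ray against that SDF.

```python
# Illustrative sketch only -- NOT the paper's code. Dense grids stand in for
# the sparse octree, and all weights are random rather than trained.
import numpy as np

FEAT_DIM = 8          # feature channels stored at each grid corner (assumed)
LODS = [4, 8, 16]     # grid resolutions standing in for octree levels (assumed)

rng = np.random.default_rng(0)
# One dense feature grid per LOD, shape (res+1, res+1, res+1, FEAT_DIM).
grids = [rng.normal(0.0, 0.1, (r + 1, r + 1, r + 1, FEAT_DIM)) for r in LODS]

# Tiny MLP decoder: (summed features, xyz) -> signed distance.
W1 = rng.normal(0.0, 0.1, (FEAT_DIM + 3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1));            b2 = np.zeros(1)

def trilerp(grid, res, p):
    """Trilinearly interpolate corner features of `grid` at point p in [0,1]^3."""
    x = np.clip(p, 0.0, 1.0) * res
    i = np.minimum(x.astype(int), res - 1)   # lower corner of the containing cell
    t = x - i                                # fractional position inside the cell
    f = np.zeros(grid.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                f += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return f

def sdf(p, lod=None):
    """Decode the signed distance at p, summing features up to the chosen LOD
    (continuous blending between adjacent LODs is omitted for brevity)."""
    n = len(LODS) if lod is None else lod
    feat = sum(trilerp(g, r, p) for g, r in zip(grids[:n], LODS[:n]))
    h = np.maximum(np.concatenate([feat, p]) @ W1 + b1, 0.0)   # ReLU hidden layer
    return (h @ W2 + b2).item()

def sphere_trace(origin, direction, lod=None, max_steps=64, eps=1e-3):
    """March a ray through the volume, stepping by |SDF| until a surface is hit."""
    d = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * d
        dist = sdf(p, lod)
        if abs(dist) < eps:
            return p        # surface hit
        t += abs(dist)
        if t > 2.0:         # ray has left the neighbourhood of the [0,1]^3 volume
            break
    return None             # miss

# With trained features and weights, a renderer would query coarse LODs for
# distant pixels and finer LODs only where detail is needed.
print(sphere_trace(np.array([0.5, 0.5, -0.25]), np.array([0.0, 0.0, 1.0]), lod=2))
```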
Related papers
- Optimizing 3D Geometry Reconstruction from Implicit Neural Representations [2.3940819037450987]
Implicit neural representations have emerged as a powerful tool in learning 3D geometry.
We present a novel approach that both reduces computational expenses and enhances the capture of fine details.
arXiv Detail & Related papers (2024-10-16T16:36:23Z)
- SeMLaPS: Real-time Semantic Mapping with Latent Prior Networks and Quasi-Planar Segmentation [53.83313235792596]
We present a new methodology for real-time semantic mapping from RGB-D sequences.
It combines a 2D neural network and a 3D network based on a SLAM system with 3D occupancy mapping.
Our system achieves state-of-the-art semantic mapping quality among 2D-3D network-based systems.
arXiv Detail & Related papers (2023-06-28T22:36:44Z)
- DiffusionSDF: Conditional Generative Modeling of Signed Distance Functions [42.015077094731815]
DiffusionSDF is a generative model for shape completion, single-view reconstruction, and reconstruction of real-scanned point clouds.
We use neural signed distance functions (SDFs) as our 3D representation to parameterize the geometry of various signals (e.g., point clouds, 2D images) through neural networks.
arXiv Detail & Related papers (2022-11-24T18:59:01Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when only a few propagation steps are used.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- FIRe: Fast Inverse Rendering using Directional and Signed Distance Functions [97.5540646069663]
We introduce a novel neural scene representation that we call the directional distance function (DDF).
Our DDF is defined on the unit sphere and predicts the distance to the surface along any given direction.
Based on our DDF, we present a novel fast algorithm (FIRe) to reconstruct 3D shapes given a posed depth map.
arXiv Detail & Related papers (2022-03-30T13:24:04Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction [27.66008315400462]
Recent learning approaches that implicitly represent surface geometry have shown impressive results in the problem of multi-view 3D reconstruction.
We address the limitations of these approaches for the specific problem of few-shot full 3D head reconstruction.
We learn a shape model of 3D heads from thousands of incomplete raw scans using implicit representations.
arXiv Detail & Related papers (2021-07-26T23:04:18Z)
- Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks [118.20778308823779]
We present a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN).
Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision.
arXiv Detail & Related papers (2021-03-18T17:59:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.