FIRe: Fast Inverse Rendering using Directional and Signed Distance
Functions
- URL: http://arxiv.org/abs/2203.16284v3
- Date: Tue, 19 Dec 2023 09:12:35 GMT
- Title: FIRe: Fast Inverse Rendering using Directional and Signed Distance
Functions
- Authors: Tarun Yenamandra and Ayush Tewari and Nan Yang and Florian Bernard and
Christian Theobalt and Daniel Cremers
- Abstract summary: We introduce a novel neural scene representation that we call the directional distance function (DDF).
Our DDF is defined on the unit sphere and predicts the distance to the surface along any given direction.
Based on our DDF, we present a novel fast algorithm (FIRe) to reconstruct 3D shapes given a posed depth map.
- Score: 97.5540646069663
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural 3D implicit representations learn priors that are useful for diverse
applications, such as single- or multiple-view 3D reconstruction. A major
downside of existing approaches is that rendering an image requires evaluating
the network multiple times per camera ray, so the high computational cost
becomes a bottleneck for downstream applications. We address
this problem by introducing a novel neural scene representation that we call
the directional distance function (DDF). To this end, we learn a signed
distance function (SDF) along with our DDF model to represent a class of
shapes. Specifically, our DDF is defined on the unit sphere and predicts the
distance to the surface along any given direction. Therefore, our DDF allows
rendering images with just a single network evaluation per camera ray. Based on
our DDF, we present a novel fast algorithm (FIRe) to reconstruct 3D shapes
given a posed depth map. We evaluate our proposed method on 3D reconstruction
from single-view depth images, where we empirically show that our algorithm
reconstructs 3D shapes more accurately and is more than 15 times faster (per
iteration) than competing methods.
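In short, the DDF acts as a learned "one-shot" renderer: since it is defined on the unit sphere and returns the distance to the surface along a given direction, one natural way to render (and the reading the sketch below adopts) is to intersect each camera ray analytically with that sphere and then issue a single network query, whereas sphere tracing an SDF needs many network evaluations per ray. The following is a minimal sketch of the two rendering loops; to keep it runnable, the learned DDF and SDF are replaced by analytic stand-ins for a small sphere, and all function and variable names are illustrative rather than taken from the paper's released code.

```python
# Sketch: depth rendering with one query per ray (DDF) versus iterative
# sphere tracing of an SDF. The "networks" below are analytic stand-ins for a
# sphere of radius 0.5 so the script runs as written; in FIRe both would be
# learned MLPs conditioned on a shape latent code.
import numpy as np

def sdf(p):
    """Stand-in SDF: signed distance to a radius-0.5 sphere at the origin."""
    return np.linalg.norm(p, axis=-1) - 0.5

def ddf(x, d):
    """Stand-in DDF: distance from point x on the unit sphere to the surface
    along unit direction d (np.inf if the ray misses). Learned in the paper."""
    b = np.sum(x * d, axis=-1)                 # ray-sphere intersection, r = 0.5
    c = np.sum(x * x, axis=-1) - 0.25
    disc = b * b - c
    t = -b - np.sqrt(np.maximum(disc, 0.0))
    return np.where((disc >= 0) & (t > 0), t, np.inf)

def render_depth_ddf(origins, dirs):
    """One network evaluation per ray: advance each ray to the unit sphere
    (the DDF's domain), then query the DDF once for the remaining distance.
    Rays that miss the unit sphere are not handled in this sketch."""
    b = np.sum(origins * dirs, axis=-1)        # intersect with the unit sphere
    c = np.sum(origins * origins, axis=-1) - 1.0
    disc = b * b - c
    t_sphere = -b - np.sqrt(np.maximum(disc, 0.0))
    x = origins + t_sphere[..., None] * dirs   # entry points on the unit sphere
    return t_sphere + ddf(x, dirs)             # single DDF query per ray

def render_depth_sphere_tracing(origins, dirs, n_steps=64, eps=1e-4):
    """Baseline: many SDF evaluations per ray (sphere tracing)."""
    t = np.zeros(origins.shape[:-1])
    for _ in range(n_steps):
        p = origins + t[..., None] * dirs
        d = sdf(p)
        t = t + d
        if np.all(np.abs(d) < eps):
            break
    return t

if __name__ == "__main__":
    # A single ray looking at the origin from z = 2; true depth is 1.5.
    o = np.array([[0.0, 0.0, 2.0]])
    d = np.array([[0.0, 0.0, -1.0]])
    print("DDF depth:          ", render_depth_ddf(o, d))
    print("Sphere-traced depth:", render_depth_sphere_tracing(o, d))
```

Because FIRe re-renders a depth map at every step of its reconstruction optimization, one query per ray is what makes each iteration cheap relative to sphere-tracing baselines; the concrete update rule FIRe uses is not spelled out in the abstract above.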
Related papers
- Weakly-Supervised 3D Reconstruction of Clothed Humans via Normal Maps [1.6462601662291156]
We present a novel deep learning-based approach to the 3D reconstruction of clothed humans using weak supervision via 2D normal maps.
Given a single RGB image or multiview images, our network infers a signed distance function (SDF) discretized on a tetrahedral mesh surrounding the body in a rest pose.
We demonstrate the efficacy of our approach for both network inference and 3D reconstruction.
arXiv Detail & Related papers (2023-11-27T18:06:35Z)
- ConRad: Image Constrained Radiance Fields for 3D Generation from a
Single Image [15.997195076224312]
We present a novel method for reconstructing 3D objects from a single RGB image.
Our method leverages the latest image generation models to infer the hidden 3D structure.
We show that our 3D reconstructions remain more faithful to the input and produce more consistent 3D models.
arXiv Detail & Related papers (2023-11-09T09:17:10Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continuous learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D
Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Neural-Pull: Learning Signed Distance Functions from Point Clouds by
Learning to Pull Space onto Surfaces [68.12457459590921]
Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing.
We introduce Neural-Pull, a new approach that is simple and leads to high-quality SDFs.
arXiv Detail & Related papers (2020-11-26T23:18:10Z)
- Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from
Single and Multiple Images [56.652027072552606]
We propose a novel framework for single-view and multi-view 3D object reconstruction, named Pix2Vox++.
By using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image.
A multi-scale context-aware fusion module is then introduced to adaptively select high-quality reconstructions for different parts from all coarse 3D volumes to obtain a fused 3D volume.
arXiv Detail & Related papers (2020-06-22T13:48:09Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled
Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
- Atlas: End-to-End 3D Scene Reconstruction from Posed Images [13.154808583020229]
We present an end-to-end 3D reconstruction method for a scene by directly regressing a truncated signed distance function (TSDF) from a set of posed RGB images.
A 2D CNN extracts features from each image independently which are then back-projected and accumulated into a voxel volume.
A 3D CNN refines the accumulated features and predicts the TSDF values.
arXiv Detail & Related papers (2020-03-23T17:59:15Z)
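For contrast with the single-evaluation DDF renderer sketched above, the Atlas entry describes the common volumetric alternative: per-image 2D CNN features are back-projected along camera rays into a shared voxel volume, accumulated over views, and a 3D CNN then regresses TSDF values. The minimal sketch below covers only the back-projection and accumulation step in NumPy; the 2D feature extractor and the 3D CNN are omitted, and the camera model, volume bounds, and all names are illustrative assumptions rather than Atlas's actual code.

```python
# Sketch of back-projecting per-view 2D features into a voxel volume and
# averaging over views; a 3D CNN would then turn the fused volume into a TSDF.
import numpy as np

def back_project(feat_2d, K, cam_from_world, grid_xyz):
    """Sample an (H, W, C) feature map at the projection of each voxel center.

    grid_xyz: (N, 3) voxel centers in world coordinates.
    Returns (N, C) features (zero for voxels projecting outside the image)
    and a boolean (N,) validity mask.
    """
    H, W, C = feat_2d.shape
    # World -> camera -> pixel coordinates (pinhole model, nearest-pixel lookup).
    cam = (cam_from_world[:3, :3] @ grid_xyz.T + cam_from_world[:3, 3:4]).T
    in_front = cam[:, 2] > 1e-6
    pix = (K @ cam.T).T
    pix = pix[:, :2] / np.maximum(pix[:, 2:3], 1e-6)
    u = np.round(pix[:, 0]).astype(int)
    v = np.round(pix[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    out = np.zeros((grid_xyz.shape[0], C), dtype=feat_2d.dtype)
    out[valid] = feat_2d[v[valid], u[valid]]
    return out, valid

def fuse_views(feature_maps, Ks, cams_from_world, grid_xyz):
    """Accumulate back-projected features over all views (simple running mean)."""
    acc = np.zeros((grid_xyz.shape[0], feature_maps[0].shape[-1]))
    count = np.zeros((grid_xyz.shape[0], 1))
    for feat, K, T in zip(feature_maps, Ks, cams_from_world):
        f, valid = back_project(feat, K, T, grid_xyz)
        acc += f
        count += valid[:, None]
    return acc / np.maximum(count, 1)  # fused volume; a 3D CNN would refine this into a TSDF

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = [rng.standard_normal((48, 64, 8)) for _ in range(2)]  # two dummy feature maps
    K = np.array([[60.0, 0.0, 32.0], [0.0, 60.0, 24.0], [0.0, 0.0, 1.0]])
    # Camera-from-world: R = I, t = (0, 0, 2), i.e. camera at world z = -2 looking along +z.
    T = np.eye(4)
    T[2, 3] = 2.0
    xs = np.linspace(-0.5, 0.5, 8)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), -1).reshape(-1, 3)
    vol = fuse_views(feats, [K, K], [T, T], grid)
    print(vol.shape)  # (512, 8) fused per-voxel features
```

Averaging over views is the simplest accumulation rule and is used here only for illustration; how Atlas weights views or handles unobserved voxels is not specified in the blurb above.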
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.