Probabilistic Directed Distance Fields for Ray-Based Shape Representations
- URL: http://arxiv.org/abs/2404.09081v1
- Date: Sat, 13 Apr 2024 21:02:49 GMT
- Title: Probabilistic Directed Distance Fields for Ray-Based Shape Representations
- Authors: Tristan Aumentado-Armstrong, Stavros Tsogkas, Sven Dickinson, Allan Jepson
- Abstract summary: Directed Distance Fields (DDFs) are a novel neural shape representation that builds upon classical distance fields.
We show how to model inherent discontinuities in the underlying field.
We then apply DDFs to several applications, including single-shape fitting, generative modelling, and single-image 3D reconstruction.
- Score: 8.134429779950658
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In modern computer vision, the optimal representation of 3D shape continues to be task-dependent. One fundamental operation applied to such representations is differentiable rendering, as it enables inverse graphics approaches in learning frameworks. Standard explicit shape representations (voxels, point clouds, or meshes) are often easily rendered, but can suffer from limited geometric fidelity, among other issues. On the other hand, implicit representations (occupancy, distance, or radiance fields) preserve greater fidelity, but suffer from complex or inefficient rendering processes, limiting scalability. In this work, we devise Directed Distance Fields (DDFs), a novel neural shape representation that builds upon classical distance fields. The fundamental operation in a DDF maps an oriented point (position and direction) to surface visibility and depth. This enables efficient differentiable rendering, obtaining depth with a single forward pass per pixel, as well as differential geometric quantity extraction (e.g., surface normals), with only additional backward passes. Using probabilistic DDFs (PDDFs), we show how to model inherent discontinuities in the underlying field. We then apply DDFs to several applications, including single-shape fitting, generative modelling, and single-image 3D reconstruction, showcasing strong performance with simple architectural components via the versatility of our representation. Finally, since the dimensionality of DDFs permits view-dependent geometric artifacts, we conduct a theoretical investigation of the constraints necessary for view consistency. We find a small set of field properties that are sufficient to guarantee a DDF is consistent, without knowing, for instance, which shape the field is expressing.
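The DDF's fundamental operation maps an oriented point (position and direction) to surface visibility and depth, so rendering a pixel costs one field evaluation per ray. As a hedged illustration of that interface (not the paper's neural model), the sketch below implements an analytic DDF for a unit sphere; the function name `sphere_ddf` and the closed-form ray/sphere intersection are assumptions for this toy example:

```python
import numpy as np

def sphere_ddf(p, d, radius=1.0):
    """Toy analytic DDF for a sphere centred at the origin.

    Maps an oriented point (position p, unit direction d) to
    (visibility, depth): visibility is 1.0 if the ray hits the
    surface, and depth is the distance to the first intersection.
    """
    p, d = np.asarray(p, float), np.asarray(d, float)
    b = np.dot(p, d)                 # ray/sphere quadratic: t^2 + 2bt + c = 0
    c = np.dot(p, p) - radius**2
    disc = b * b - c
    if disc < 0.0:
        return 0.0, np.inf           # ray misses the sphere entirely
    t = -b - np.sqrt(disc)           # nearer intersection
    if t < 0.0:
        t = -b + np.sqrt(disc)       # origin inside the sphere: take exit point
    if t < 0.0:
        return 0.0, np.inf           # surface lies behind the ray
    return 1.0, t

# "Rendering" a pixel is a single evaluation per ray:
vis, depth = sphere_ddf(p=[0.0, 0.0, -2.0], d=[0.0, 0.0, 1.0])
# vis == 1.0, depth == 1.0: the ray from z = -2 hits the unit sphere at z = -1
```

In the neural setting this closed form is replaced by a learned field, and quantities such as surface normals follow from backward passes through the same network.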
Related papers
- DDF-HO: Hand-Held Object Reconstruction via Conditional Directed Distance Field [82.81337273685176]
DDF-HO is a novel approach leveraging Directed Distance Field (DDF) as the shape representation.
We randomly sample multiple rays and collect local to global geometric features for them by introducing a novel 2D ray-based feature aggregation scheme.
Experiments on synthetic and real-world datasets demonstrate that DDF-HO consistently outperforms all baseline methods by a large margin.
arXiv Detail & Related papers (2023-08-16T09:06:32Z) - Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z) - Multi-View Reconstruction using Signed Ray Distance Functions (SRDF) [22.75986869918975]
We investigate a new computational approach built on a novel volumetric shape representation.
The shape energy associated with this representation evaluates 3D geometry given color images and does not need appearance prediction.
In practice we propose an implicit shape representation, the SRDF, based on signed distances which we parameterize by depths along camera rays.
arXiv Detail & Related papers (2022-08-31T19:32:17Z) - NeuralODF: Learning Omnidirectional Distance Fields for 3D Shape Representation [7.208066405543874]
In visual computing, 3D geometry is represented in many different forms including meshes, point clouds, voxel grids, level sets, and depth images.
We propose Omnidirectional Distance Fields (ODFs), a new 3D shape representation that encodes geometry by storing the depth to the object's surface from any 3D position in any viewing direction.
arXiv Detail & Related papers (2022-06-12T20:59:26Z) - Representing 3D Shapes with Probabilistic Directed Distance Fields [7.528141488548544]
We develop a novel shape representation that allows fast differentiable rendering within an implicit architecture.
We show how to model inherent discontinuities in the underlying field.
We also apply our method to fitting single shapes, unpaired 3D-aware generative image modelling, and single-image 3D reconstruction tasks.
arXiv Detail & Related papers (2021-12-10T02:15:47Z) - Gradient-SDF: A Semi-Implicit Surface Representation for 3D Reconstruction [53.315347543761426]
Gradient-SDF is a novel representation for 3D geometry that combines the advantages of implicit and explicit representations.
By storing at every voxel both the signed distance field as well as its gradient vector field, we enhance the capability of implicit representations.
We show that (1) the Gradient-SDF allows us to perform direct SDF tracking from depth images, using efficient storage schemes like hash maps, and that (2) the Gradient-SDF representation enables us to perform photometric bundle adjustment directly in a voxel representation.
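A minimal sketch of the core idea (not the authors' implementation): each voxel of a hash-map grid stores both the signed distance and its gradient, so the closest surface point can be recovered from a single voxel as x - ψ(x)·∇ψ(x), without neighbour interpolation. The analytic unit-sphere SDF and all function names below are illustrative assumptions:

```python
import numpy as np

def sphere_sdf(x):                     # analytic SDF of the unit sphere
    return np.linalg.norm(x) - 1.0

def sphere_grad(x):                    # its gradient (the outward unit normal)
    return np.asarray(x) / np.linalg.norm(x)

def build_grid(samples, voxel_size=0.25):
    grid = {}                          # hash map: voxel index -> (sdf, gradient)
    for x in samples:
        key = tuple(np.floor(np.asarray(x) / voxel_size).astype(int))
        grid[key] = (sphere_sdf(x), sphere_grad(x))
    return grid

def closest_surface_point(x, sdf_val, grad):
    # Storing the gradient alongside the distance makes this a direct
    # computation: step back along the gradient by the signed distance.
    return np.asarray(x) - sdf_val * np.asarray(grad)

x = np.array([0.0, 0.0, 2.0])
psi, g = sphere_sdf(x), sphere_grad(x)
print(closest_surface_point(x, psi, g))   # -> [0. 0. 1.], a point on the sphere
```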
arXiv Detail & Related papers (2021-11-26T18:33:14Z) - Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z) - Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z) - DUDE: Deep Unsigned Distance Embeddings for Hi-Fidelity Representation of Complex 3D Surfaces [8.104199886760275]
DUDE is a disentangled shape representation that utilizes an unsigned distance field (uDF) to represent proximity to a surface, and a normal vector field (nVF) to represent surface orientation.
We show that a combination of these two (uDF+nVF) can be used to learn high fidelity representations for arbitrary open/closed shapes.
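As a hedged illustration of why the two fields complement each other (an analytic toy, not DUDE's learned networks): an unsigned distance alone cannot tell inside from outside, but paired with an orientation field the surface can still be recovered by testing both candidate projections. The unit-sphere fields and the helper `project_to_surface` below are assumptions for this sketch:

```python
import numpy as np

def udf(x):            # unsigned distance to the unit sphere
    return abs(np.linalg.norm(x) - 1.0)

def nvf(x):            # surface orientation (outward normal direction)
    return np.asarray(x) / np.linalg.norm(x)

def project_to_surface(x):
    x = np.asarray(x, float)
    u, n = udf(x), nvf(x)
    candidates = [x - u * n, x + u * n]   # uDF gives the step size, nVF the axis
    return min(candidates, key=udf)       # keep the candidate that lands on the surface

print(project_to_surface([0.0, 0.0, 3.0]))   # -> [0. 0. 1.]  (from outside)
print(project_to_surface([0.0, 0.5, 0.0]))   # -> [0. 1. 0.]  (from inside)
```

The sign ambiguity of the unsigned distance is resolved by checking which of the two candidates actually lies on the zero set, which is the kind of disambiguation the normal vector field makes cheap.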
arXiv Detail & Related papers (2020-11-04T22:49:05Z) - Neural Unsigned Distance Fields for Implicit Function Learning [53.241423815726925]
We propose Neural Distance Fields (NDF), a neural network based model which predicts the unsigned distance field for arbitrary 3D shapes.
NDFs represent surfaces at high resolution, as prior implicit models do, but do not require closed surface data.
NDFs can be used for multi-target regression (multiple outputs for one input) with techniques previously used exclusively for rendering in graphics.
arXiv Detail & Related papers (2020-10-26T22:49:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.