3D Shapes Local Geometry Codes Learning with SDF
- URL: http://arxiv.org/abs/2108.08593v1
- Date: Thu, 19 Aug 2021 09:56:03 GMT
- Title: 3D Shapes Local Geometry Codes Learning with SDF
- Authors: Shun Yao, Fei Yang, Yongmei Cheng, Mikhail G. Mozerov
- Abstract summary: A signed distance function (SDF) is one of the most effective representations of 3D geometry for rendering and reconstruction.
In this paper, we consider the reconstruction degeneration caused by the limited capacity of the DeepSDF model.
We propose Local Geometry Code Learning (LGCL), a model that improves the original DeepSDF results by learning from local shape geometry.
- Score: 8.37542758486152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A signed distance function (SDF) is one of the most effective
representations of 3D geometry for rendering and reconstruction. Our work is
inspired by the state-of-the-art method DeepSDF, which learns and analyzes a 3D
shape as the iso-surface of its shell and has shown promising results,
especially in the 3D shape reconstruction and compression domain. In this
paper, we consider the reconstruction degeneration caused by the limited
capacity of the DeepSDF model, which approximates the SDF with a neural network
and a single latent code. We propose Local Geometry Code Learning (LGCL), a
model that improves on the original DeepSDF by learning from the local geometry
of the full 3D shape. We add an extra graph neural network that splits the
single transmittable latent code into a set of local latent codes distributed
over the 3D shape. These latent codes approximate the SDF within their local
regions, which reduces the complexity of the approximation compared to the
original DeepSDF. Furthermore, we introduce a new geometric loss function to
facilitate the training of these local latent codes. Note that other local
shape-adjusting methods rely on a 3D voxel representation, which leads to
problems that are very difficult or even impossible to solve. In contrast, our
architecture operates on the graph implicitly and performs the regression
directly in latent-code space, which makes it more flexible and simpler to
implement. Our experiments on 3D shape reconstruction demonstrate that LGCL
preserves more detail with a significantly smaller SDF decoder and considerably
outperforms the original DeepSDF under the most important quantitative metrics.
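The core idea of splitting a single DeepSDF latent code into region-bound local codes can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in (the anchor layout, code size, and a random linear map in place of the learned decoder); it shows only the lookup-then-decode idea, not the authors' GNN-based implementation or geometric loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K anchor points spread over the shape, each carrying
# its own local latent code -- the single DeepSDF code split into pieces.
K, CODE_DIM = 8, 4
anchors = rng.normal(size=(K, 3))             # anchor positions on the shape
local_codes = rng.normal(size=(K, CODE_DIM))  # one latent code per region

# Stand-in decoder: a fixed random linear map from [code, xyz] to an SDF value.
W = rng.normal(size=CODE_DIM + 3)

def local_sdf(query):
    """Approximate the SDF at `query` from the nearest anchor's local code."""
    query = np.asarray(query, dtype=float)
    k = int(np.argmin(np.linalg.norm(anchors - query, axis=1)))  # pick region
    feat = np.concatenate([local_codes[k], query])
    return float(feat @ W)  # decoder sees only a local code, not a global one

print(local_sdf([0.1, 0.2, 0.3]))
```

Each local code only has to describe the geometry near its anchor, which is what lets the shared decoder stay small in the paper's experiments.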
Related papers
- Few-Shot Unsupervised Implicit Neural Shape Representation Learning with Spatial Adversaries [8.732260277121547]
Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities.
Within the realm of 3D shape representation, Neural Signed Distance Functions (SDF) have demonstrated remarkable potential in faithfully encoding intricate shape geometry.
arXiv Detail & Related papers (2024-08-27T14:54:33Z)
- gSDF: Geometry-Driven Signed Distance Functions for 3D Hand-Object Reconstruction [94.46581592405066]
We exploit the hand structure and use it as guidance for SDF-based shape reconstruction.
We predict kinematic chains of pose transformations and align SDFs with highly-articulated hand poses.
arXiv Detail & Related papers (2023-04-24T10:05:48Z)
- FineRecon: Depth-aware Feed-forward Network for Detailed 3D Reconstruction [13.157400338544177]
Recent works on 3D reconstruction from posed images have demonstrated that direct inference of scene-level 3D geometry is feasible using deep neural networks.
We propose three effective solutions for improving the fidelity of inference-based 3D reconstructions.
Our method, FineRecon, produces smooth and highly accurate reconstructions, showing significant improvements across multiple depth and 3D reconstruction metrics.
arXiv Detail & Related papers (2023-04-04T02:50:29Z)
- Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion [115.82306502822412]
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing.
A corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing.
We study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures.
arXiv Detail & Related papers (2022-12-14T18:49:50Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- FIRe: Fast Inverse Rendering using Directional and Signed Distance Functions [97.5540646069663]
We introduce a novel neural scene representation that we call the directional distance function (DDF).
Our DDF is defined on the unit sphere and predicts the distance to the surface along any given direction.
Based on our DDF, we present a novel fast algorithm (FIRe) to reconstruct 3D shapes given a posed depth map.
arXiv Detail & Related papers (2022-03-30T13:24:04Z)
- High-fidelity 3D Model Compression based on Key Spheres [6.59007277780362]
We propose an SDF prediction network using explicit key spheres as input.
Our method achieves the high-fidelity and high-compression 3D object coding and reconstruction.
arXiv Detail & Related papers (2022-01-19T09:21:54Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- A Simple and Scalable Shape Representation for 3D Reconstruction [22.826897662839357]
We show that we can obtain high quality 3D reconstruction using a linear decoder, obtained from principal component analysis on the signed distance function (SDF) of the surface.
This approach allows easily scaling to larger resolutions.
arXiv Detail & Related papers (2020-05-10T10:22:50Z)
- Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction [19.003819911951297]
We introduce Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements.
Unlike DeepSDF, which represents an object-level SDF with a neural network and a single latent code, we store a grid of independent latent codes, each responsible for storing information about surfaces in a small local neighborhood.
This decomposition of scenes into local shapes simplifies the prior distribution that the network must learn, and also enables efficient inference.
arXiv Detail & Related papers (2020-03-24T17:21:50Z)
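The DeepLS decomposition described above, a regular grid of independent latent codes with a small shared decoder, can be sketched in the same spirit. The grid resolution, code size, and random linear "decoder" below are hypothetical stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a coarse 4^3 grid over the unit cube, with an
# independent latent code stored in every cell (unlike DeepSDF's single
# object-level code).
R, CODE_DIM = 4, 4
grid_codes = rng.normal(size=(R, R, R, CODE_DIM))
W = rng.normal(size=CODE_DIM + 3)  # stand-in for the small shared decoder

def grid_sdf(query):
    """Decode the SDF at `query` from the latent code of its grid cell."""
    q = np.clip(np.asarray(query, dtype=float), 0.0, 1.0 - 1e-9)
    cell = (q * R).astype(int)  # index of the cell containing q
    local = q * R - cell        # coordinates relative to that cell's corner
    code = grid_codes[tuple(cell)]
    return float(np.concatenate([code, local]) @ W)

print(grid_sdf([0.3, 0.7, 0.5]))
```

Because each code only covers one small cell, the prior the decoder must learn is much simpler than a whole-object SDF, which is the efficiency argument the summary makes.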
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of the above) and is not responsible for any consequences of its use.