Deep Local Shapes: Learning Local SDF Priors for Detailed 3D
Reconstruction
- URL: http://arxiv.org/abs/2003.10983v3
- Date: Fri, 21 Aug 2020 21:52:09 GMT
- Title: Deep Local Shapes: Learning Local SDF Priors for Detailed 3D
Reconstruction
- Authors: Rohan Chabra, Jan Eric Lenssen, Eddy Ilg, Tanner Schmidt, Julian
Straub, Steven Lovegrove, Richard Newcombe
- Abstract summary: We introduce Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements.
Unlike DeepSDF, which represents an object-level SDF with a neural network and a single latent code, we store a grid of independent latent codes, each responsible for storing information about surfaces in a small local neighborhood.
This decomposition of scenes into local shapes simplifies the prior distribution that the network must learn, and also enables efficient inference.
- Score: 19.003819911951297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficiently reconstructing complex and intricate surfaces at scale is a
long-standing goal in machine perception. To address this problem we introduce
Deep Local Shapes (DeepLS), a deep shape representation that enables encoding
and reconstruction of high-quality 3D shapes without prohibitive memory
requirements. DeepLS replaces the dense volumetric signed distance function
(SDF) representation used in traditional surface reconstruction systems with a
set of locally learned continuous SDFs defined by a neural network, inspired by
recent work such as DeepSDF. Unlike DeepSDF, which represents an object-level
SDF with a neural network and a single latent code, we store a grid of
independent latent codes, each responsible for storing information about
surfaces in a small local neighborhood. This decomposition of scenes into local
shapes simplifies the prior distribution that the network must learn, and also
enables efficient inference. We demonstrate the effectiveness and
generalization power of DeepLS by showing object shape encoding and
reconstructions of full scenes, where DeepLS delivers high compression,
accuracy, and local shape completion.
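The core idea above, a regular grid of independent latent codes, each decoded into a local SDF by a shared network, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the grid size, code dimension, and the linear "decoder" standing in for the trained network are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 4          # assumed: 4 x 4 x 4 grid of local cells over the unit cube
CODE_DIM = 8      # assumed: latent code length per cell
CELL = 1.0 / GRID

# One independent latent code per grid cell (randomly initialized here;
# DeepLS would optimize these against observed SDF samples).
latent_grid = rng.normal(size=(GRID, GRID, GRID, CODE_DIM))

# Stand-in for the shared decoder f(code, local_xyz) -> sdf.
# A trained system would use a small neural network here.
W = rng.normal(size=(CODE_DIM + 3,))

def query_sdf(p):
    """Evaluate the local SDF at a world-space point p in [0, 1)^3."""
    p = np.asarray(p, dtype=float)
    idx = np.minimum((p / CELL).astype(int), GRID - 1)   # which cell
    local = (p - idx * CELL) / CELL                      # coords inside the cell
    code = latent_grid[tuple(idx)]                       # that cell's latent code
    return float(np.concatenate([code, local]) @ W)

s = query_sdf([0.3, 0.7, 0.1])
```

Because each code only has to explain geometry in its own small cell, the shared decoder sees a much simpler distribution of local shapes than a single object-level code would, which is the decomposition argument the abstract makes.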
Related papers
- A Latent Implicit 3D Shape Model for Multiple Levels of Detail [95.56814217356667]
Implicit neural representations map a shape-specific latent code and a 3D coordinate to its corresponding signed distance (SDF) value.
However, this approach offers only a single level of detail.
We propose a new shape modeling approach, which enables multiple levels of detail and guarantees a smooth surface at each level.
arXiv Detail & Related papers (2024-09-10T05:57:58Z)
- HIO-SDF: Hierarchical Incremental Online Signed Distance Fields [26.263670265735858]
A good representation of a large, complex mobile robot workspace must be space-efficient yet capable of encoding relevant geometric details.
We introduce HIO-SDF, a new method that represents the environment as a Signed Distance Field (SDF).
HIO-SDF achieves a 46% lower mean global SDF error across all test scenes than a state-of-the-art continuous representation.
arXiv Detail & Related papers (2023-10-14T01:17:56Z)
- FineRecon: Depth-aware Feed-forward Network for Detailed 3D Reconstruction [13.157400338544177]
Recent works on 3D reconstruction from posed images have demonstrated that direct inference of scene-level 3D geometry is feasible using deep neural networks.
We propose three effective solutions for improving the fidelity of inference-based 3D reconstructions.
Our method, FineRecon, produces smooth and highly accurate reconstructions, showing significant improvements across multiple depth and 3D reconstruction metrics.
arXiv Detail & Related papers (2023-04-04T02:50:29Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continuous learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- High-fidelity 3D Model Compression based on Key Spheres [6.59007277780362]
We propose an SDF prediction network using explicit key spheres as input.
Our method achieves high-fidelity, high-compression 3D object coding and reconstruction.
arXiv Detail & Related papers (2022-01-19T09:21:54Z)
- 3D Shapes Local Geometry Codes Learning with SDF [8.37542758486152]
A signed distance function (SDF) as the 3D shape description is one of the most effective approaches to represent 3D geometry for rendering and reconstruction.
In this paper, we consider the degeneration problem of reconstruction coming from the capacity decrease of the DeepSDF model.
We propose Local Geometry Code Learning (LGCL), a model that improves the original DeepSDF results by learning from a local shape geometry.
arXiv Detail & Related papers (2021-08-19T09:56:03Z)
- S2R-DepthNet: Learning a Generalizable Depth-specific Structural Representation [63.58891781246175]
Humans can infer the 3D geometry of a scene from a sketch rather than a realistic image, which indicates that spatial structure plays a fundamental role in understanding the depth of scenes.
We are the first to explore the learning of a depth-specific structural representation, which captures the essential feature for depth estimation and ignores irrelevant style information.
Our S2R-DepthNet can be well generalized to unseen real-world data directly even though it is only trained on synthetic data.
arXiv Detail & Related papers (2021-04-02T03:55:41Z)
- PLADE-Net: Towards Pixel-Level Accuracy for Self-Supervised Single-View Depth Estimation with Neural Positional Encoding and Distilled Matting Loss [49.66736599668501]
We propose a self-supervised single-view pixel-level accurate depth estimation network, called PLADE-Net.
Our method shows unprecedented accuracy levels, exceeding 95% in terms of the $\delta_1$ metric on the KITTI dataset.
arXiv Detail & Related papers (2021-03-12T15:54:46Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.