SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images
- URL: http://arxiv.org/abs/2010.10505v1
- Date: Tue, 20 Oct 2020 17:59:47 GMT
- Title: SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images
- Authors: Chen-Hsuan Lin, Chaoyang Wang, Simon Lucey
- Abstract summary: Recent efforts have turned to learning 3D reconstruction from RGB images with annotated 2D silhouettes, without any 3D supervision.
These techniques still require multi-view annotations of the same object instance during training.
We propose SDF-SRN, an approach that requires only a single view of objects at training time.
- Score: 44.78174845839193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dense 3D object reconstruction from a single image has recently witnessed
remarkable advances, but supervising neural networks with ground-truth 3D
shapes is impractical due to the laborious process of creating paired
image-shape datasets. Recent efforts have turned to learning 3D reconstruction
from RGB images with annotated 2D silhouettes, without 3D supervision,
dramatically reducing the cost and effort of annotation. These techniques,
however, remain impractical as they still require multi-view annotations of the
same object instance during training. As a result, most experimental efforts to
date have been limited to synthetic datasets. In this paper, we address this
issue and propose SDF-SRN, an approach that requires only a single view of
objects at training time, offering greater utility for real-world scenarios.
SDF-SRN learns implicit 3D shape representations to handle arbitrary shape
topologies that may exist in the datasets. To this end, we derive a novel
differentiable rendering formulation for learning signed distance functions
(SDF) from 2D silhouettes. Our method outperforms the state of the art under
challenging single-view supervision settings on both synthetic and real-world
datasets.
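To make the core idea concrete, here is a minimal, hypothetical PyTorch sketch of supervising an implicit SDF network with only 2D silhouettes. SDF-SRN derives a specific differentiable rendering formulation for this; the sigmoid-of-minimum-SDF loss below is only a simplified stand-in for that derivation, and all names (SDFNet, silhouette_loss, the sharpness parameter alpha) are invented for illustration.

```python
# Schematic stand-in, NOT SDF-SRN's actual rendering formulation:
# a ray lies inside the object's silhouette iff the minimum signed
# distance along it is negative, so a sigmoid of the scaled, negated
# minimum gives a differentiable soft occupancy per pixel.
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """MLP mapping a 3D point to a scalar signed distance."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points):               # points: (..., 3)
        return self.net(points).squeeze(-1)  # (...,) signed distances

def silhouette_loss(sdf_net, rays_o, rays_d, mask, n_samples=64, alpha=50.0):
    """Soft silhouette supervision for an SDF from a single view.

    rays_o, rays_d: (R, 3) per-pixel ray origins and unit directions.
    mask:           (R,) ground-truth silhouette values in {0, 1}.
    """
    t = torch.linspace(0.1, 4.0, n_samples, device=rays_o.device)     # sample depths
    pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]  # (R, S, 3)
    min_sdf = sdf_net(pts).min(dim=-1).values                         # (R,)
    occupancy = torch.sigmoid(-alpha * min_sdf)                       # soft inside/outside
    return nn.functional.binary_cross_entropy(occupancy, mask)

# Toy usage with random rays and a random mask:
net = SDFNet()
rays_o = torch.zeros(128, 3)
rays_d = nn.functional.normalize(torch.randn(128, 3), dim=-1)
mask = torch.randint(0, 2, (128,)).float()
silhouette_loss(net, rays_o, rays_d, mask).backward()
```

In the single-view-per-instance regime, each training image contributes only its own silhouette loss; the network never sees two views of the same object, which is exactly the setting the paper targets.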
Related papers
- Few-Shot Unsupervised Implicit Neural Shape Representation Learning with Spatial Adversaries [8.732260277121547]
Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities.
Within the realm of 3D shape representation, Neural Signed Distance Functions (SDF) have demonstrated remarkable potential in faithfully encoding intricate shape geometry.
arXiv Detail & Related papers (2024-08-27T14:54:33Z)
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- SUGAR: Pre-training 3D Visual Representations for Robotics [85.55534363501131]
We introduce a novel 3D pre-training framework for robotics named SUGAR.
SUGAR captures semantic, geometric and affordance properties of objects through 3D point clouds.
We show that SUGAR's 3D representation outperforms state-of-the-art 2D and 3D representations.
arXiv Detail & Related papers (2024-04-01T21:23:03Z)
- 3D Surface Reconstruction in the Wild by Deforming Shape Priors from Synthetic Data [24.97027425606138]
Reconstructing the underlying 3D surface of an object from a single image is a challenging problem.
We present a new method for joint category-specific 3D reconstruction and object pose estimation from a single image.
Our approach achieves state-of-the-art reconstruction performance across several real-world datasets.
arXiv Detail & Related papers (2023-02-24T20:37:27Z)
- RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of synthetic datasets of CAD object models to boost learning on real datasets.
Recent work on 3D pre-training exhibits failure when transferring features learned on synthetic objects to other real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
arXiv Detail & Related papers (2021-08-17T17:56:12Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation renders two to three orders of magnitude faster than previous works (a minimal sphere-tracing sketch of this style of SDF rendering follows this list).
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Procrustean Regression Networks: Learning 3D Structure of Non-Rigid Objects from 2D Annotations [42.476537776831314]
We propose a novel framework for training neural networks to learn the 3D structure of non-rigid objects.
The proposed framework shows superior reconstruction performance to the state-of-the-art method on the Human3.6M, 300-VW, and SURREAL datasets.
arXiv Detail & Related papers (2020-07-21T17:29:20Z)
- Self-Supervised 2D Image to 3D Shape Translation with Disentangled Representations [92.89846887298852]
We present SIST, a Self-supervised Image to Shape Translation framework that translates between 2D image views and 3D object shapes.
arXiv Detail & Related papers (2020-03-22T22:44:02Z)
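The Neural Geometric Level of Detail entry above concerns real-time rendering of neural SDFs. The standard primitive behind such renderers is sphere tracing, sketched below under the assumption of a generic Python callable sdf(points); this is an illustrative snippet, not code from any of the papers listed.

```python
# Minimal sphere-tracing sketch (illustrative only). Each ray advances by the
# SDF value at its current point -- the largest step guaranteed not to cross
# the surface of a true signed distance field.
import torch

def sphere_trace(sdf, rays_o, rays_d, n_steps=64, eps=1e-3, t_max=10.0):
    """sdf: callable mapping (R, 3) points -> (R,) signed distances.
    Returns per-ray hit depths, inf where the ray missed."""
    t = torch.zeros(rays_o.shape[0])
    hit = torch.zeros(rays_o.shape[0], dtype=torch.bool)
    for _ in range(n_steps):
        pts = rays_o + t[:, None] * rays_d
        d = sdf(pts)
        hit = hit | (d.abs() < eps)
        t = torch.where(hit, t, (t + d).clamp(max=t_max))  # freeze converged rays
    return torch.where(hit, t, torch.full_like(t, float("inf")))

# Example with an analytic unit sphere at the origin:
sphere = lambda p: p.norm(dim=-1) - 1.0
rays_o = torch.tensor([[0.0, 0.0, -3.0]]).repeat(4, 1)
rays_d = torch.nn.functional.normalize(
    torch.tensor([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0],
                  [0.0, 0.3, 1.0], [1.0, 0.0, 0.1]]), dim=-1)
print(sphere_trace(sphere, rays_o, rays_d))  # first three rays hit, the last misses
```

This matches such renderers in spirit only; NGLOD's contribution is accelerating the sdf queries themselves with a sparse feature octree, while the marching loop stays essentially as above.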
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.