Topologically-Aware Deformation Fields for Single-View 3D Reconstruction
- URL: http://arxiv.org/abs/2205.06267v1
- Date: Thu, 12 May 2022 17:59:59 GMT
- Title: Topologically-Aware Deformation Fields for Single-View 3D Reconstruction
- Authors: Shivam Duggal, Deepak Pathak
- Abstract summary: We present a new framework for learning 3D object shapes and dense cross-object 3D correspondences from just an unaligned category-specific image collection.
The 3D shapes are generated implicitly as deformations to a category-specific signed distance field.
Our approach, dubbed TARS, achieves state-of-the-art reconstruction fidelity on several datasets.
- Score: 30.738926104317514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new framework for learning 3D object shapes and dense
cross-object 3D correspondences from just an unaligned category-specific image
collection. The 3D shapes are generated implicitly as deformations to a
category-specific signed distance field and are learned in an unsupervised
manner solely from unaligned image collections without any 3D supervision.
Generally, image collections on the internet contain several intra-category
geometric and topological variations, for example, different chairs can have
different topologies, which makes the task of joint shape and correspondence
estimation much more challenging. Because of this, prior works either focus on
learning each 3D object shape individually without modeling cross-instance
correspondences or perform joint shape and correspondence estimation on
categories with minimal intra-category topological variations. We overcome
these restrictions by learning a topologically-aware implicit deformation field
that maps a 3D point in the object space to a higher dimensional point in the
category-specific canonical space. At inference time, given a single image, we
reconstruct the underlying 3D shape by first implicitly deforming each 3D point
in the object space to the learned category-specific canonical space using the
topologically-aware deformation field and then reconstructing the 3D shape as a
canonical signed distance field. Both canonical shape and deformation field are
learned end-to-end in an inverse-graphics fashion using a learned recurrent ray
marcher (SRN) as a differentiable rendering module. Our approach, dubbed TARS,
achieves state-of-the-art reconstruction fidelity on several datasets:
ShapeNet, Pascal3D+, CUB, and Pix3D chairs. Result videos and code at
https://shivamduggal4.github.io/tars-3D/
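The inference pipeline the abstract describes (deform each object-space 3D point into a higher-dimensional canonical space, then evaluate a single category-level canonical SDF at the deformed point) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the layer sizes, the shape-code dimension, and the plain NumPy MLPs are all assumptions, and the real model conditions the shape code on an image encoder and is trained end-to-end through an SRN-style differentiable ray marcher.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    """Randomly initialized MLP weights (illustrative; the real networks are trained)."""
    return [(rng.normal(0.0, 0.1, (i, o)), np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """Apply an MLP with tanh hidden activations and a linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Deformation field: object-space 3D point plus a per-instance shape code ->
# a HIGHER-dimensional canonical point, so instances with different topologies
# can occupy different regions of the canonical space.
CODE_DIM, CANON_DIM = 8, 4  # illustrative sizes (canonical dim > 3)
deform_net = mlp([3 + CODE_DIM, 64, 64, CANON_DIM])

# Canonical shape: one category-level SDF defined over the canonical space.
sdf_net = mlp([CANON_DIM, 64, 64, 1])

def predict_sdf(points, shape_code):
    """SDF of one instance: deform each query point, then query the canonical SDF."""
    codes = np.tile(shape_code, (points.shape[0], 1))
    canonical_pts = forward(deform_net, np.concatenate([points, codes], axis=1))
    return forward(sdf_net, canonical_pts)[:, 0]

points = rng.normal(size=(5, 3))        # query points in object space
shape_code = rng.normal(size=CODE_DIM)  # would come from an image encoder
sdf_values = predict_sdf(points, shape_code)
print(sdf_values.shape)  # (5,)
```

Because the canonical space is higher-dimensional than 3D, the deformation field is not forced to be a bijection between 3D shapes, which is what lets a single canonical SDF cover topologically different instances within a category.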
Related papers
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- 3D Surface Reconstruction in the Wild by Deforming Shape Priors from Synthetic Data [24.97027425606138]
Reconstructing the underlying 3D surface of an object from a single image is a challenging problem.
We present a new method for joint category-specific 3D reconstruction and object pose estimation from a single image.
Our approach achieves state-of-the-art reconstruction performance across several real-world datasets.
arXiv Detail & Related papers (2023-02-24T20:37:27Z)
- ConDor: Self-Supervised Canonicalization of 3D Pose for Partial Shapes [55.689763519293464]
ConDor is a self-supervised method that learns to canonicalize the 3D orientation and position for full and partial 3D point clouds.
During inference, our method takes an unseen full or partial 3D point cloud at an arbitrary pose and outputs an equivariant canonical pose.
arXiv Detail & Related papers (2022-01-19T18:57:21Z)
- Multi-Category Mesh Reconstruction From Image Collections [90.24365811344987]
We present an alternative approach that infers the textured mesh of objects by combining a series of deformable 3D models with instance-specific deformations, poses, and textures.
Our method is trained with images of multiple object categories using only foreground masks and rough camera poses as supervision.
Experiments show that the proposed framework can distinguish between different object categories and learn category-specific shape priors in an unsupervised manner.
arXiv Detail & Related papers (2021-10-21T16:32:31Z)
- Learning Canonical 3D Object Representation for Fine-Grained Recognition [77.33501114409036]
We propose a novel framework for fine-grained object recognition that learns to recover object variation in 3D space from a single image.
We represent an object as a composition of 3D shape and its appearance, while eliminating the effect of camera viewpoint.
By incorporating 3D shape and appearance jointly in a deep representation, our method learns the discriminative representation of the object.
arXiv Detail & Related papers (2021-08-10T12:19:34Z)
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z)
- Fine-Grained 3D Shape Classification with Hierarchical Part-View Attentions [70.0171362989609]
We propose a novel fine-grained 3D shape classification method named FG3D-Net to capture the fine-grained local details of 3D shapes from multiple rendered views.
Our results under the fine-grained 3D shape dataset show that our method outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-05-26T06:53:19Z)
- 3D Shape Segmentation with Geometric Deep Learning [2.512827436728378]
We propose a neural-network-based approach that produces 3D augmented views of a 3D shape, solving the full segmentation task as a set of sub-segmentation problems.
We validate our approach using 3D shapes of publicly available datasets and of real objects that are reconstructed using photogrammetry techniques.
arXiv Detail & Related papers (2020-02-02T14:11:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.