Deep Implicit Templates for 3D Shape Representation
- URL: http://arxiv.org/abs/2011.14565v2
- Date: Thu, 13 May 2021 09:22:32 GMT
- Title: Deep Implicit Templates for 3D Shape Representation
- Authors: Zerong Zheng, Tao Yu, Qionghai Dai, Yebin Liu
- Abstract summary: We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations.
Our key idea is to formulate DIFs as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
- Score: 70.9789507686618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep implicit functions (DIFs), as a kind of 3D shape
representation, are becoming increasingly popular in the 3D vision community
due to their compactness and strong representational power. However, unlike
polygon mesh-based templates, it remains challenging to reason about dense
correspondences or other semantic relationships across shapes represented by
DIFs, which limits their applications in texture transfer, shape analysis and
so on. To overcome this
limitation and also make DIFs more interpretable, we propose Deep Implicit
Templates, a new 3D shape representation that supports explicit correspondence
reasoning in deep implicit representations. Our key idea is to formulate DIFs
as conditional deformations of a template implicit function. To this end, we
propose Spatial Warping LSTM, which decomposes the conditional spatial
transformation into multiple affine transformations and guarantees
generalization capability. Moreover, the training loss is carefully designed in
order to achieve high reconstruction accuracy while learning a plausible
template with accurate correspondences in an unsupervised manner. Experiments
show that our method can not only learn a common implicit template for a
collection of shapes, but also establish dense correspondences across all the
shapes simultaneously without any supervision.
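As a concrete illustration of the formulation above, here is a minimal PyTorch sketch, not the authors' released implementation: a shared template SDF is composed with a conditional warping, and an LSTM cell decomposes the point transformation into a sequence of affine updates. The latent dimension, network sizes, and step count are illustrative assumptions.

```python
# Minimal sketch of the Deep Implicit Templates formulation (assumptions:
# latent_dim, hidden sizes, and the number of warping steps are illustrative;
# this is not the authors' released code).
import torch
import torch.nn as nn

class SpatialWarper(nn.Module):
    """Warps query points toward template space via a sequence of affine
    updates predicted by an LSTM cell, conditioned on a shape code."""
    def __init__(self, latent_dim=256, hidden_dim=128, steps=8):
        super().__init__()
        self.steps = steps
        self.cell = nn.LSTMCell(3 + latent_dim, hidden_dim)
        self.affine = nn.Linear(hidden_dim, 12)  # 3x3 linear part + translation

    def forward(self, points, latent):
        # points: (N, 3) query locations; latent: (N, latent_dim) shape code
        h = points.new_zeros(points.shape[0], self.cell.hidden_size)
        c = torch.zeros_like(h)
        p = points
        for _ in range(self.steps):
            h, c = self.cell(torch.cat([p, latent], dim=-1), (h, c))
            theta = self.affine(h)
            A = theta[:, :9].view(-1, 3, 3)          # per-point linear part
            t = theta[:, 9:]                         # per-point translation
            p = torch.bmm(A, p.unsqueeze(-1)).squeeze(-1) + t
        return p  # canonical (template-space) coordinates

class DeepImplicitTemplate(nn.Module):
    """SDF(x | z) = T(W(x, z)): a template SDF shared by all shapes,
    queried at conditionally warped coordinates."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.warper = SpatialWarper(latent_dim)
        self.template_sdf = nn.Sequential(       # shared across the collection
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, points, latent):
        return self.template_sdf(self.warper(points, latent))
```

Because every shape is decoded through the same template, dense correspondences come for free: points from two shapes that warp to the same template-space location are matched.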
Related papers
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- ReshapeIT: Reliable Shape Interaction with Implicit Template for Anatomical Structure Reconstruction [59.971808117043366]
ReshapeIT represents an anatomical structure with an implicit template field shared within the same category.
It ensures that the implicit template field generates valid templates by strengthening the correspondence constraint between the instance shape and the template shape.
A Template Interaction Module is introduced to reconstruct unseen shapes by letting the valid template shapes interact with the instance-wise latent codes.
arXiv Detail & Related papers (2023-12-11T07:09:32Z)
- Self-supervised Learning of Implicit Shape Representation with Dense Correspondence for Deformable Objects [26.102490905989338]
We propose a novel self-supervised approach to learn neural implicit shape representation for deformable objects.
Our method does not require skeleton or skinning-weight priors, and only requires a collection of shapes represented as signed distance fields.
Our model can represent shapes with large deformations and supports two typical applications: texture transfer and shape editing.
arXiv Detail & Related papers (2023-08-24T06:38:33Z)
- Hybrid Neural Diffeomorphic Flow for Shape Representation and Generation via Triplane [16.684276798449115]
HNDF is a method that implicitly learns the underlying representation and decomposes intricate dense correspondences into explicit, axis-aligned triplane features.
Unlike conventional approaches that directly generate new 3D shapes, we explore shape generation by deforming a template shape via diffeomorphic flows.
arXiv Detail & Related papers (2023-07-04T23:28:01Z)
- 3D Equivariant Graph Implicit Functions [51.5559264447605]
We introduce a novel family of graph implicit functions with equivariant layers that facilitates modeling fine local details.
Our method improves over the existing rotation-equivariant implicit function from 0.69 to 0.89 on the ShapeNet reconstruction task.
arXiv Detail & Related papers (2022-03-31T16:51:25Z)
- Topology-Preserving Shape Reconstruction and Registration via Neural Diffeomorphic Flow [22.1959666473906]
Deep Implicit Functions (DIFs) represent 3D geometry with continuous signed distance functions learned through deep neural nets.
We propose a new model called Neural Diffeomorphic Flow (NDF) to learn deep implicit shape templates.
NDF consistently achieves state-of-the-art organ shape reconstruction and registration results in both accuracy and quality.
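To make the topology-preservation idea concrete, here is a hedged sketch of a diffeomorphic flow (not the NDF authors' code): the deformation is obtained by integrating a learned velocity field over time, so the resulting map is smooth and invertible by construction. The forward-Euler scheme, step count, and network sizes are assumptions.

```python
# Hedged sketch of a diffeomorphic flow for implicit templates (assumptions:
# forward-Euler integration and illustrative network sizes; not the NDF code).
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Velocity v(p, z) of a point p under shape code z."""
    def __init__(self, latent_dim=128, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 3),
        )

    def forward(self, p, z):
        return self.net(torch.cat([p, z], dim=-1))

def flow_to_template(points, latent, field, n_steps=16):
    """Integrate dp/dt = v(p, z) from t=0 to t=1 (forward Euler).
    Integrating -v recovers the inverse map, so topology is preserved."""
    dt = 1.0 / n_steps
    p = points
    for _ in range(n_steps):
        p = p + dt * field(p, latent)
    return p  # template-space coordinates
```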
arXiv Detail & Related papers (2022-03-16T14:39:11Z)
- ShapeFormer: Transformer-based Shape Completion via Sparse Representation [41.33457875133559]
We present ShapeFormer, a network that produces a distribution of object completions conditioned on incomplete, and possibly noisy, point clouds.
The resultant distribution can then be sampled to generate likely completions, each exhibiting plausible shape details while being faithful to the input.
arXiv Detail & Related papers (2022-01-25T13:58:30Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
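A hedged sketch of that inversion step follows; the released implementation uses Broyden's method with analytic gradients, whereas this simplification uses plain fixed-point iteration, and skinning_weights is a hypothetical stand-in for the learned skinning-weight network.

```python
# Hedged sketch of inverting forward linear blend skinning (LBS) by root
# finding; simplified to fixed-point iteration (SNARF itself uses Broyden's
# method). `skinning_weights` is a hypothetical learned weight network.
import torch

def lbs(x_c, bone_transforms, skinning_weights):
    """Forward LBS: x_d = sum_b w_b(x_c) * (B_b @ x_c)."""
    w = skinning_weights(x_c)                                  # (N, num_bones)
    x_h = torch.cat([x_c, torch.ones_like(x_c[:, :1])], -1)   # homogeneous
    per_bone = torch.einsum('bij,nj->nbi', bone_transforms, x_h)[..., :3]
    return (w.unsqueeze(-1) * per_bone).sum(dim=1)             # (N, 3)

def find_canonical(x_d, bone_transforms, skinning_weights, iters=20):
    """Solve lbs(x_c) = x_d for x_c: start at the deformed point and walk
    against the skinning residual until the forward map reproduces x_d."""
    x_c = x_d.clone()
    for _ in range(iters):
        residual = lbs(x_c, bone_transforms, skinning_weights) - x_d
        x_c = x_c - residual
    return x_c
```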
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
- Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence [30.849927968528238]
We propose a novel Deformed Implicit Field representation for modeling 3D shapes of a category.
Our neural network, dubbed DIF-Net, jointly learns a shape latent space and the deformed implicit fields for 3D objects belonging to a category.
Experiments show that DIF-Net not only produces high-fidelity 3D shapes but also builds high-quality dense correspondences across different shapes.
arXiv Detail & Related papers (2020-11-27T10:45:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.