ShapeFlow: Learnable Deformations Among 3D Shapes
- URL: http://arxiv.org/abs/2006.07982v2
- Date: Wed, 23 Jun 2021 22:06:18 GMT
- Title: ShapeFlow: Learnable Deformations Among 3D Shapes
- Authors: Chiyu "Max" Jiang, Jingwei Huang, Andrea Tagliasacchi, Leonidas Guibas
- Abstract summary: We present a flow-based model for learning a deformation space for entire classes of 3D shapes with large intra-class variations.
ShapeFlow allows learning a multi-template deformation space that is agnostic to shape topology, yet preserves fine geometric details.
- Score: 28.854946339507123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present ShapeFlow, a flow-based model for learning a deformation space for
entire classes of 3D shapes with large intra-class variations. ShapeFlow allows
learning a multi-template deformation space that is agnostic to shape topology,
yet preserves fine geometric details. Different from a generative space where a
latent vector is directly decoded into a shape, a deformation space decodes a
vector into a continuous flow that can advect a source shape towards a target.
Such a space naturally allows the disentanglement of geometric style (coming
from the source) and structural pose (conforming to the target). We parametrize
the deformation between geometries as a learned continuous flow field via a
neural network and show that such deformations can be guaranteed to have
desirable properties, such as bijectivity, freedom from self-intersections,
or volume preservation. We illustrate the effectiveness of this learned
deformation space for various downstream applications, including shape
generation via deformation, geometric style transfer, unsupervised learning of
a consistent parameterization for entire classes of shapes, and shape
interpolation.
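The core mechanism described above, decoding a vector into a continuous flow that advects a source shape toward a target, can be sketched numerically. In the sketch below, a hand-crafted, time-dependent velocity field and a scalar `latent` stand in for ShapeFlow's learned neural flow field; both are invented for illustration and are not the paper's architecture.

```python
import numpy as np

def velocity(points, t, latent):
    """Toy stand-in for a learned flow field v(x, t; z). The actual
    model conditions a neural network on source/target latent codes;
    here we use a purely illustrative analytic field: a rotation about
    the z-axis blended over time with a small translation."""
    rot = np.stack([-points[:, 1], points[:, 0],
                    np.zeros(len(points))], axis=1)
    return latent * (1.0 - t) * rot + latent * t * np.array([0.1, 0.0, 0.0])

def advect(points, latent, steps=100):
    """Forward-Euler integration of dx/dt = v(x, t) over t in [0, 1].
    In the continuum, integrating a smooth velocity field yields an
    invertible (diffeomorphic) map; reversing the integration
    approximately recovers the source shape."""
    pts = points.copy()
    dt = 1.0 / steps
    for k in range(steps):
        pts += dt * velocity(pts, k * dt, latent)
    return pts

src = np.random.rand(256, 3)        # source shape as a point set
deformed = advect(src, latent=0.5)  # advected toward the "target"
```

Because the deformation is the time-1 map of an ODE rather than a direct decoder output, the source's fine geometric detail is carried along by the flow instead of being re-synthesized.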
Related papers
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- Explorable Mesh Deformation Subspaces from Unstructured Generative Models [53.23510438769862]
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
arXiv Detail & Related papers (2023-10-11T18:53:57Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation [54.09274684734721]
We present a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
We may jointly train an encoder network to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations.
arXiv Detail & Related papers (2023-02-01T02:47:53Z)
- Neural Shape Deformation Priors [14.14047635248036]
We present Neural Shape Deformation Priors, a novel method for shape manipulation.
We learn the deformation behavior based on the underlying geometric properties of a shape.
Our method can be applied to challenging deformations and generalizes well to unseen deformations.
arXiv Detail & Related papers (2022-10-11T17:03:25Z)
- Topology-Preserving Shape Reconstruction and Registration via Neural Diffeomorphic Flow [22.1959666473906]
Deep Implicit Functions (DIFs) represent 3D geometry with continuous signed distance functions learned through deep neural nets.
We propose a new model called Neural Diffeomorphic Flow (NDF) to learn deep implicit shape templates.
NDF achieves consistently state-of-the-art organ shape reconstruction and registration results in both accuracy and quality.
arXiv Detail & Related papers (2022-03-16T14:39:11Z)
- Cloud Sphere: A 3D Shape Representation via Progressive Deformation [21.216503294296317]
This paper is dedicated to discovering distinctive information from the shape formation process.
A Progressive Deformation-based Auto-Encoder is proposed to learn the stage-aware description.
Experimental results show that the proposed PDAE has the ability to reconstruct 3D shapes with high fidelity.
arXiv Detail & Related papers (2021-12-21T12:10:23Z)
- Augmenting Implicit Neural Shape Representations with Explicit Deformation Fields [95.39603371087921]
Implicit neural representation is a recent approach to learn shape collections as zero level-sets of neural networks.
We advocate deformation-aware regularization for implicit neural representations, aiming at producing plausible deformations as latent code changes.
arXiv Detail & Related papers (2021-08-19T22:07:08Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
- Disentangling Geometric Deformation Spaces in Generative Latent Shape Models [5.582957809895198]
A complete representation of 3D objects requires characterizing the space of deformations in an interpretable manner.
We improve on a prior generative model of disentanglement for 3D shapes, wherein the space of object geometry is factorized into rigid orientation, non-rigid pose, and intrinsic shape.
The resulting model can be trained from raw 3D shapes, without correspondences, labels, or even rigid alignment.
arXiv Detail & Related papers (2021-02-27T06:54:31Z)
- ResNet-LDDMM: Advancing the LDDMM Framework Using Deep Residual Networks [86.37110868126548]
In this work, we make use of deep residual neural networks to solve the non-stationary ODE (flow equation) based on an Euler discretization scheme.
We illustrate these ideas on diverse registration problems of 3D shapes under complex topology-preserving transformations.
arXiv Detail & Related papers (2021-02-16T04:07:13Z)
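The ResNet-LDDMM entry above rests on the observation that a residual block x_{k+1} = x_k + h * f(x_k, t_k) is exactly one forward-Euler step of the flow ODE dx/dt = f(x, t). A minimal sketch, with a made-up fixed random map `f` standing in for a trained residual network:

```python
import numpy as np

# Hypothetical two-layer map f(x, t); in ResNet-LDDMM this would be a
# trained residual network, and t enters because the ODE is non-stationary.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 8)) * 0.1
W2 = rng.normal(size=(8, 3)) * 0.1

def f(x, t):
    # Time enters as a bias on the hidden layer (illustrative choice).
    return np.tanh(x @ W1 + t) @ W2

def resnet_flow(x, layers=20):
    """Stack of residual blocks == forward-Euler integration of
    dx/dt = f(x, t) over t in [0, 1] with step h = 1/layers."""
    h = 1.0 / layers
    for k in range(layers):
        x = x + h * f(x, k * h)  # one residual block == one Euler step
    return x

x0 = rng.normal(size=(5, 3))  # 5 points in R^3
x1 = resnet_flow(x0)
```

Deepening the network (more layers) refines the time discretization, which is why small per-block updates tend to keep the overall transformation well-behaved and topology-preserving.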
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.