Neural Shape Deformation Priors
- URL: http://arxiv.org/abs/2210.05616v1
- Date: Tue, 11 Oct 2022 17:03:25 GMT
- Title: Neural Shape Deformation Priors
- Authors: Jiapeng Tang, Lev Markhasin, Bi Wang, Justus Thies, Matthias Nießner
- Abstract summary: We present Neural Shape Deformation Priors, a novel method for shape manipulation.
We learn the deformation behavior based on the underlying geometric properties of a shape.
Our method can be applied to challenging deformations and generalizes well to unseen deformations.
- Score: 14.14047635248036
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present Neural Shape Deformation Priors, a novel method for shape
manipulation that predicts mesh deformations of non-rigid objects from
user-provided handle movements. State-of-the-art methods cast this problem as
an optimization task, where the input source mesh is iteratively deformed to
minimize an objective function according to hand-crafted regularizers such as
ARAP. In this work, we learn the deformation behavior based on the underlying
geometric properties of a shape, while leveraging a large-scale dataset
containing a diverse set of non-rigid deformations. Specifically, given a
source mesh and desired target locations of handles that describe the partial
surface deformation, we predict a continuous deformation field defined over
3D space that describes how the space around the shape deforms. To this end,
we introduce transformer-based deformation networks that represent a shape
deformation as a composition of local surface deformations. These networks
learn a set of local latent codes anchored in 3D space, from which continuous
deformation functions for local surfaces are derived. Our method can be applied to
challenging deformations and generalizes well to unseen deformations. We
validate our approach in experiments using the DeformingThing4D dataset, and
compare to both classic optimization-based and recent neural network-based
methods.
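To make the setup concrete, the sketch below illustrates the interface such a method exposes: a network maps a query point plus an encoding of the handle constraints to a 3D offset. This is a minimal stand-in, not the paper's transformer-based architecture (which anchors local latent codes in 3D space rather than pooling into a single global code); all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Minimal sketch of a handle-conditioned continuous deformation field.

    Not the paper's transformer architecture: it compresses the handle
    constraints into one global code, whereas the paper anchors *local*
    latent codes in 3D space. It only illustrates the interface
    (query point, handle constraints) -> 3D offset."""
    def __init__(self, handle_dim=64, hidden=256):
        super().__init__()
        # Encode each (source, target) handle pair, then pool into one code.
        self.handle_encoder = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(), nn.Linear(hidden, handle_dim))
        self.field = nn.Sequential(
            nn.Linear(3 + handle_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))  # per-point offset

    def forward(self, points, handle_src, handle_dst):
        # points: (N, 3); handle_src, handle_dst: (H, 3)
        pairs = torch.cat([handle_src, handle_dst], dim=-1)      # (H, 6)
        code = self.handle_encoder(pairs).mean(dim=0)            # (handle_dim,)
        code = code.expand(points.shape[0], -1)                  # (N, handle_dim)
        offsets = self.field(torch.cat([points, code], dim=-1))  # (N, 3)
        return points + offsets

# Usage: deform all mesh vertices given a few moved handles.
field = DeformationField()
verts = torch.rand(1000, 3)
src = torch.rand(5, 3)
deformed = field(verts, src, src + 0.1)  # shift handles by 0.1 along each axis
```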
Related papers
- Explorable Mesh Deformation Subspaces from Unstructured Generative Models [53.23510438769862]
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
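A minimal sketch of this idea, assuming a frozen pre-trained generator with a 256-dimensional latent space (the mapping network, sizes, and names are illustrative, not the paper's implementation):

```python
import torch
import torch.nn as nn

# Hypothetical mapping from a 2D exploration space to the latent space of a
# frozen, pre-trained shape generator; sizes and names are illustrative only.
exploration_map = nn.Sequential(
    nn.Linear(2, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 256))  # 256 = assumed latent size of the frozen generator

uv = torch.tensor([[0.3, 0.7]])   # a point picked in the 2D exploration UI
z = exploration_map(uv)           # latent code in the learned subspace
# mesh = frozen_generator.decode(z)  # decode with the pre-trained model
```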
arXiv Detail & Related papers (2023-10-11T18:53:57Z)
- DragD3D: Realistic Mesh Editing with Rigidity Control Driven by 2D Diffusion Priors [10.355568895429588]
Direct mesh editing and deformation are key components in the geometric modeling and animation pipeline.
Classic geometric regularizers, however, are not aware of the global context and semantics of the object.
We show that our deformations can be controlled to yield realistic shape deformations aware of the global context.
arXiv Detail & Related papers (2023-10-06T19:55:40Z)
- Neural Implicit Shape Editing using Boundary Sensitivity [12.621108702820313]
We leverage boundary sensitivity to express how perturbations in parameters move the shape boundary.
With this, we perform geometric editing: finding a parameter update that best approximates a globally prescribed deformation.
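As a plausible reading of this summary, the first-order statement is the standard level-set sensitivity relation: perturbing the parameters of an implicit shape f_\theta(x) = 0 moves each boundary point along its normal, and editing fits the parameter update to a prescribed boundary motion in a least-squares sense. The formulas below state this classical relation, not the paper's exact objective:

```latex
% First-order motion of the zero level set f_\theta(x) = 0 under a parameter
% perturbation \delta\theta: require f_{\theta+\delta\theta}(x+\delta x) = 0
% to first order. The boundary moves with normal velocity
\[
  v_n(x) = -\,\frac{\partial_\theta f_\theta(x)\,\delta\theta}
                  {\lVert \nabla_x f_\theta(x) \rVert},
  \qquad f_\theta(x) = 0 .
\]
% Geometric editing then picks the parameter update whose induced boundary
% motion best matches a prescribed target deformation, e.g.
\[
  \delta\theta^\star = \arg\min_{\delta\theta}
  \int_{\partial\Omega}
  \bigl( v_n(x) - v_n^{\mathrm{target}}(x) \bigr)^2 \, \mathrm{d}A .
\]
```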
arXiv Detail & Related papers (2023-04-24T13:04:15Z)
- Deforming Radiance Fields with Cages [65.57101724686527]
We propose a new type of deformation of the radiance field: free-form radiance field deformation.
We use a triangular mesh that encloses the foreground object, called a cage, as the deformation interface.
We propose a novel formulation that extends cage-based deformation to the radiance field, mapping the position and view direction of sampling points from the deformed space to the canonical space.
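A minimal sketch of the deformed-to-canonical mapping that cage-based deformation enables, assuming simple inverse-distance weights as a stand-in for proper mean value or harmonic coordinates (the helper names are hypothetical):

```python
import numpy as np

def cage_coords(p, cage_verts, eps=1e-8):
    """Hypothetical stand-in for proper generalized barycentric coordinates
    (e.g. mean value coordinates): simple inverse-distance weights."""
    d = np.linalg.norm(cage_verts - p, axis=1)
    w = 1.0 / (d + eps)
    return w / w.sum()

def deformed_to_canonical(p_deformed, cage_deformed, cage_canonical):
    """Map a ray sample from deformed space back to canonical space:
    express the point in coordinates of the *deformed* cage, then
    reconstruct it from the *canonical* cage vertices. This is the
    direction needed to query a radiance field trained in canonical
    space at samples taken along rays in the deformed scene."""
    w = cage_coords(p_deformed, cage_deformed)
    return w @ cage_canonical  # (3,)
```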
arXiv Detail & Related papers (2022-07-25T16:08:55Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
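Once a shape Laplacian L is available, the classical way to use it for handle-driven deformation is a linear least-squares solve that preserves differential coordinates. The sketch below shows that standard solve only; the paper's contribution, predicting L for a point cloud with a network, is not modeled here:

```python
import numpy as np

def laplacian_edit(verts, L, handle_idx, handle_pos, w=10.0):
    """Classical Laplacian editing: deform a shape while preserving its
    differential coordinates. Minimizes
        ||L X' - L X||^2 + w^2 ||X'[handles] - handle_pos||^2
    in the least-squares sense."""
    n = verts.shape[0]
    delta = L @ verts                              # original differential coords
    C = np.zeros((len(handle_idx), n))
    C[np.arange(len(handle_idx)), handle_idx] = w  # soft handle constraints
    A = np.vstack([L, C])
    b = np.vstack([delta, w * np.asarray(handle_pos)])
    deformed, *_ = np.linalg.lstsq(A, b, rcond=None)
    return deformed
```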
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- DeepMLS: Geometry-Aware Control Point Deformation [76.51312491336343]
We introduce DeepMLS, a space-based deformation technique guided by a set of displaced control points.
We leverage the power of neural networks to inject the underlying shape geometry into the deformation parameters.
Our technique facilitates intuitive piecewise smooth deformations, which are well suited for manufactured objects.
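For the translation-only transform class, the moving-least-squares solve at each query point has a closed form: a weighted average of the control-point displacements. The sketch below shows this classical backbone; DeepMLS additionally learns geometry-aware control points and weights, which is not modeled here:

```python
import numpy as np

def mls_translation_deform(points, ctrl_src, ctrl_dst, alpha=2.0, eps=1e-8):
    """Space deformation driven by displaced control points.

    For a translation-only transform class, the per-point moving-least-
    squares solve reduces to a weighted average of the control-point
    displacements with weights w_i = 1 / |x - c_i|^(2*alpha)."""
    disp = ctrl_dst - ctrl_src                      # (C, 3) displacements
    out = np.empty_like(points)
    for k, x in enumerate(points):
        d2 = np.sum((ctrl_src - x) ** 2, axis=1)    # squared distances (C,)
        w = 1.0 / (d2 ** alpha + eps)               # MLS weights
        out[k] = x + (w[:, None] * disp).sum(axis=0) / w.sum()
    return out
```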
arXiv Detail & Related papers (2022-01-05T23:55:34Z)
- Augmenting Implicit Neural Shape Representations with Explicit Deformation Fields [95.39603371087921]
Implicit neural representations are a recent approach to learning shape collections as zero level-sets of neural networks.
We advocate deformation-aware regularization for implicit neural representations, aiming at producing plausible deformations as latent code changes.
arXiv Detail & Related papers (2021-08-19T22:07:08Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
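A minimal sketch of the root-finding view of forward skinning: given a deformed point x_d, find a canonical x_c with LBS(x_c) = x_d. SNARF uses Broyden's method from multiple initializations to recover *all* roots; the damped fixed-point iteration below is a simplified single-root stand-in, and weights_fn / bone_transforms are assumed inputs:

```python
import numpy as np

def lbs(x_c, weights_fn, bone_transforms):
    """Forward linear blend skinning of one canonical point x_c (3,).
    weights_fn(x_c) -> (B,) skinning weights; bone_transforms: (B, 4, 4)."""
    w = weights_fn(x_c)
    blended = np.einsum('b,bij->ij', w, bone_transforms)  # (4, 4)
    return (blended @ np.append(x_c, 1.0))[:3]

def canonical_correspondence(x_d, weights_fn, bone_transforms,
                             x_init, iters=20, damping=1.0):
    """Solve lbs(x_c) = x_d for x_c with a damped fixed-point iteration,
    starting from a single initialization and returning one candidate
    correspondence (SNARF instead finds all roots via Broyden's method)."""
    x_c = np.asarray(x_init, dtype=float).copy()
    for _ in range(iters):
        residual = lbs(x_c, weights_fn, bone_transforms) - x_d
        x_c -= damping * residual
    return x_c
```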
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
- Disentangling Geometric Deformation Spaces in Generative Latent Shape Models [5.582957809895198]
A complete representation of 3D objects requires characterizing the space of deformations in an interpretable manner.
We improve on a prior generative model of disentanglement for 3D shapes, wherein the space of object geometry is factorized into rigid orientation, non-rigid pose, and intrinsic shape.
The resulting model can be trained from raw 3D shapes, without correspondences, labels, or even rigid alignment.
arXiv Detail & Related papers (2021-02-27T06:54:31Z)
- ShapeFlow: Learnable Deformations Among 3D Shapes [28.854946339507123]
We present a flow-based model for learning a deformation space for entire classes of 3D shapes with large intra-class variations.
ShapeFlow allows learning a multi-template deformation space that is agnostic to shape topology, yet preserves fine geometric details.
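A minimal sketch of a flow-based deformation, assuming a learned velocity field integrated with forward Euler (the conditioning and architecture are illustrative, not ShapeFlow's):

```python
import torch
import torch.nn as nn

class FlowField(nn.Module):
    """Hypothetical velocity field v(x, t | z_src, z_tgt)."""
    def __init__(self, code_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1 + 2 * code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, x, t, z_src, z_tgt):
        cond = torch.cat([z_src, z_tgt]).expand(x.shape[0], -1)
        t_col = torch.full((x.shape[0], 1), t)
        return self.net(torch.cat([x, t_col, cond], dim=-1))

def deform(points, field, z_src, z_tgt, steps=16):
    """Forward-Euler integration of dx/dt = v(x, t). Integrating a smooth
    velocity field yields an invertible map (run it backwards with -v),
    which is why flow-based deformations preserve shape topology."""
    dt = 1.0 / steps
    x = points
    for i in range(steps):
        x = x + dt * field(x, i * dt, z_src, z_tgt)
    return x

# Usage: flow source-shape points toward the target's deformation state.
field = FlowField()
pts = torch.rand(2048, 3)
z_a, z_b = torch.zeros(32), torch.ones(32)
deformed = deform(pts, field, z_a, z_b)
```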
arXiv Detail & Related papers (2020-06-14T19:03:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.