Neural Implicit Shape Editing using Boundary Sensitivity
- URL: http://arxiv.org/abs/2304.12951v1
- Date: Mon, 24 Apr 2023 13:04:15 GMT
- Title: Neural Implicit Shape Editing using Boundary Sensitivity
- Authors: Arturs Berzins, Moritz Ibing, Leif Kobbelt
- Abstract summary: We leverage boundary sensitivity to express how perturbations in parameters move the shape boundary.
With this, we perform geometric editing: finding a parameter update that best approximates a globally prescribed deformation.
- Score: 12.621108702820313
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural fields are receiving increased attention as a geometric representation
due to their ability to compactly store detailed and smooth shapes and easily
undergo topological changes. Compared to classic geometry representations,
however, neural representations do not allow the user to exert intuitive
control over the shape. Motivated by this, we leverage boundary sensitivity to
express how perturbations in parameters move the shape boundary. This allows us to
interpret the effect of each learnable parameter and study achievable
deformations. With this, we perform geometric editing: finding a parameter
update that best approximates a globally prescribed deformation. Prescribing
the deformation only locally allows the rest of the shape to change according
to some prior, such as semantics or deformation rigidity. Our method is
agnostic to the model and its training, and updates the NN in-place. Furthermore, we
show how boundary sensitivity helps to optimize and constrain objectives (such
as surface area and volume), which are difficult to compute without first
converting to another representation, such as a mesh.
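The core idea can be illustrated with a toy stand-in for a neural implicit: the sketch below (an assumption of this summary, not code from the paper) uses a parametric ellipse f(x; a, b) = x²/a² + y²/b² − 1 in place of a network, with the semi-axes (a, b) playing the role of learnable weights. Boundary sensitivity says a parameter perturbation dθ moves the zero level set with normal velocity v = −(∂f/∂θ · dθ)/|∇ₓf|; editing then amounts to solving a least-squares problem for the dθ that best matches a prescribed velocity, and the same boundary integral gives shape derivatives of quantities such as area.

```python
import numpy as np

# Toy implicit shape: an ellipse with "parameters" theta = (a, b),
# standing in for the weights of a neural implicit (hypothetical example).
def grad_x(p, a, b):            # spatial gradient  grad_x f
    return np.stack([2 * p[:, 0] / a**2, 2 * p[:, 1] / b**2], axis=1)

def jac_theta(p, a, b):         # parameter Jacobian  df/dtheta = (df/da, df/db)
    return np.stack([-2 * p[:, 0]**2 / a**3, -2 * p[:, 1]**2 / b**3], axis=1)

# Sample points on the boundary (here the unit circle, a = b = 1).
a, b = 1.0, 1.0
phi = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
pts = np.stack([a * np.cos(phi), b * np.sin(phi)], axis=1)

J = jac_theta(pts, a, b)                       # (n, 2)
gn = np.linalg.norm(grad_x(pts, a, b), axis=1) # |grad_x f| at each point

# Geometric editing: prescribe a uniform outward normal velocity of 0.1
# and solve  -(J dtheta)/|grad f| = v_target  for dtheta in least squares.
v_target = 0.1 * np.ones(len(pts))
dtheta, *_ = np.linalg.lstsq(-J / gn[:, None], v_target, rcond=None)
print(dtheta)                                  # → close to [0.1, 0.1]

# Boundary sensitivity also yields shape derivatives of integral objectives:
# d(area)/da = integral of v_a over the boundary, with v_a the velocity
# induced by perturbing a alone.  (Exact value: d(pi*a*b)/da = pi*b = pi.)
v_a = -J[:, 0] / gn                            # = cos^2(phi) on the unit circle
ds = 2.0 * np.pi / len(pts)                    # arc-length element
dA_da = np.sum(v_a * ds)
print(dA_da)                                   # → close to pi
```

For a circle, a uniform outward expansion is exactly reproducible by growing both semi-axes equally, so the least-squares solve recovers dθ ≈ (0.1, 0.1); with a real network, ∂f/∂θ would instead come from automatic differentiation and the system would generally be solved only approximately.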
Related papers
- DragD3D: Realistic Mesh Editing with Rigidity Control Driven by 2D Diffusion Priors [10.355568895429588]
Direct mesh editing and deformation are key components in the geometric modeling and animation pipeline.
Regularizers are not aware of the global context and semantics of the object.
We show that our deformations can be controlled to yield realistic shape deformations aware of the global context.
arXiv Detail & Related papers (2023-10-06T19:55:40Z) - Neural Shape Deformation Priors [14.14047635248036]
We present Neural Shape Deformation Priors, a novel method for shape manipulation.
We learn the deformation behavior based on the underlying geometric properties of a shape.
Our method can be applied to challenging deformations and generalizes well to unseen deformations.
arXiv Detail & Related papers (2022-10-11T17:03:25Z) - DeepMLS: Geometry-Aware Control Point Deformation [76.51312491336343]
We introduce DeepMLS, a space-based deformation technique, guided by a set of displaced control points.
We leverage the power of neural networks to inject the underlying shape geometry into the deformation parameters.
Our technique facilitates intuitive piecewise smooth deformations, which are well suited for manufactured objects.
arXiv Detail & Related papers (2022-01-05T23:55:34Z) - Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that TinvNN can strictly guarantee transformation invariance, being general and flexible enough to be combined with the existing neural networks.
arXiv Detail & Related papers (2021-12-23T03:52:33Z) - Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
arXiv Detail & Related papers (2021-12-03T06:41:19Z) - Augmenting Implicit Neural Shape Representations with Explicit Deformation Fields [95.39603371087921]
Implicit neural representation is a recent approach to learn shape collections as zero level-sets of neural networks.
We advocate deformation-aware regularization for implicit neural representations, aiming at producing plausible deformations as latent code changes.
arXiv Detail & Related papers (2021-08-19T22:07:08Z) - SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
arXiv Detail & Related papers (2021-04-08T17:54:59Z) - NiLBS: Neural Inverse Linear Blend Skinning [59.22647012489496]
We introduce a method to invert the deformations undergone via traditional skinning techniques via a neural network parameterized by pose.
The ability to invert these deformations allows values (e.g., distance function, signed distance function, occupancy) to be pre-computed at rest pose, and then efficiently queried when the character is deformed.
arXiv Detail & Related papers (2020-04-06T20:46:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.