Augmenting Implicit Neural Shape Representations with Explicit
Deformation Fields
- URL: http://arxiv.org/abs/2108.08931v1
- Date: Thu, 19 Aug 2021 22:07:08 GMT
- Authors: Matan Atzmon, David Novotny, Andrea Vedaldi, Yaron Lipman
- Abstract summary: Implicit neural representation is a recent approach to learning shape collections as zero level-sets of neural networks.
We advocate deformation-aware regularization for implicit neural representations, aiming to produce plausible deformations as the latent code changes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit neural representation is a recent approach to learning shape
collections as zero level-sets of neural networks, where each shape is
represented by a latent code. So far, the focus has been on shape reconstruction,
while shape generalization was mostly left to generic encoder-decoder or
auto-decoder regularization.
In this paper we advocate deformation-aware regularization for implicit
neural representations, aiming to produce plausible deformations as the latent
code changes. The challenge is that implicit representations do not capture
correspondences between different shapes, which makes it difficult to represent
and regularize their deformations. Thus, we propose to pair the implicit
representation of the shapes with an explicit, piecewise linear deformation
field, learned as an auxiliary function. We demonstrate that, by regularizing
these deformation fields, we can encourage the implicit neural representation
to induce natural deformations in the learned shape space, such as
as-rigid-as-possible deformations.
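The abstract's key construction, an explicit piecewise linear deformation field whose regularization encourages as-rigid-as-possible (ARAP) behavior, can be sketched in a few lines. The following is a hypothetical NumPy illustration, not the paper's implementation: the Gaussian blending weights, the node placement, and the specific ARAP penalty (deviation of each node's linear part from a rotation) are assumptions made for this sketch.

```python
import numpy as np

def pwl_deformation(x, nodes, A, b, sigma=1.0):
    # Piecewise linear deformation field: each node k carries an affine
    # map (A_k, b_k); a query point is deformed by a distance-weighted
    # blend of the per-node affine maps. The Gaussian weighting is an
    # assumption for this sketch, not the paper's scheme.
    d2 = ((x[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)   # (N, K)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w = w / w.sum(axis=1, keepdims=True)                      # normalized weights
    per_node = np.einsum('kij,nj->nki', A, x) + b[None, :, :] # (N, K, 3)
    return np.einsum('nk,nki->ni', w, per_node)               # (N, 3)

def arap_penalty(A):
    # ARAP-style regularizer: penalize each node's linear part A_k for
    # deviating from a rotation, i.e. drive A_k^T A_k toward the identity.
    AtA = np.einsum('kji,kjl->kil', A, A)
    I = np.eye(A.shape[-1])
    return ((AtA - I) ** 2).sum()
```

With identity affine maps the field is the identity and the penalty is zero; introducing a shear into any `A_k` leaves the field well-defined but makes the penalty positive, which is the behavior a deformation-aware regularizer would discourage.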
Related papers
- Self-supervised Learning of Implicit Shape Representation with Dense Correspondence for Deformable Objects (2023-08-24)
We propose a novel self-supervised approach to learning a neural implicit shape representation for deformable objects.
Our method requires neither skeleton nor skinning-weight priors; it needs only a collection of shapes represented as signed distance fields.
Our model can represent shapes with large deformations and supports applications such as texture transfer and shape editing.
- Neural Implicit Shape Editing using Boundary Sensitivity (2023-04-24)
We leverage boundary sensitivity to express how perturbations in parameters move the shape boundary.
With this, we perform geometric editing: finding a parameter update that best approximates a globally prescribed deformation.
- Reduced Representation of Deformation Fields for Effective Non-rigid Shape Matching (2022-11-26)
We present a novel approach for computing correspondences between non-rigid objects by exploiting a reduced representation of deformation fields.
By letting the network learn deformation parameters at a sparse set of positions in space (nodes), we reconstruct the continuous deformation field in closed form with guaranteed smoothness.
Our model has high expressive power and is able to capture complex deformations.
- NeuForm: Adaptive Overfitting for Neural Shape Editing (2022-07-18)
We propose NeuForm to combine the advantages of overfitted and generalizable representations by adaptively using whichever is most appropriate for each shape region.
We demonstrate edits that successfully reconfigure parts of human-designed shapes, such as chairs, tables, and lamps.
We compare with two state-of-the-art competitors and demonstrate clear improvements in the plausibility and fidelity of the resulting edits.
- Shape-Pose Disentanglement using SE(3)-equivariant Vector Neurons (2022-04-03)
We introduce an unsupervised technique for encoding point clouds into a canonical shape representation by disentangling shape and pose.
Our encoder is stable and consistent, meaning that the shape encoding is purely pose-invariant.
The extracted rotation and translation can semantically align different input shapes of the same class to a common canonical pose.
- Frame Averaging for Equivariant Shape Space Learning (2021-12-03)
A natural way to incorporate symmetries in shape space learning is to require that the mapping to the shape space (encoder) and the mapping from it (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders through two contributions.
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes (2021-04-08)
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
- Deep Implicit Templates for 3D Shape Representation (2020-11-30)
We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations.
Our key idea is to formulate deep implicit functions (DIFs) as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously, without any supervision.
This list is automatically generated from the titles and abstracts of the papers in this site.