Frame Averaging for Equivariant Shape Space Learning
- URL: http://arxiv.org/abs/2112.01741v1
- Date: Fri, 3 Dec 2021 06:41:19 GMT
- Title: Frame Averaging for Equivariant Shape Space Learning
- Authors: Matan Atzmon, Koki Nagano, Sanja Fidler, Sameh Khamis, Yaron Lipman
- Abstract summary: A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
- Score: 85.42901997467754
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of shape space learning involves mapping a training set of shapes to
and from a latent representation space with good generalization properties.
Often, real-world collections of shapes have symmetries, which can be defined
as transformations that do not change the essence of the shape. A natural way
to incorporate symmetries in shape space learning is to ask that the mapping to
the shape space (encoder) and mapping from the shape space (decoder) are
equivariant to the relevant symmetries.
In this paper, we present a framework for incorporating equivariance in
encoders and decoders by introducing two contributions: (i) adapting the recent
Frame Averaging (FA) framework for building generic, efficient, and maximally
expressive equivariant autoencoders; and (ii) constructing autoencoders
equivariant to piecewise Euclidean motions applied to different parts of the
shape. To the best of our knowledge, this is the first fully piecewise
Euclidean equivariant autoencoder construction. Training our framework is
simple: it uses standard reconstruction losses and does not require the
introduction of new losses. Our architectures are built of standard (backbone)
architectures with the appropriate frame averaging to make them equivariant.
Testing our framework on both a rigid shape dataset using implicit neural
representations and articulated shape datasets using mesh-based neural
networks shows state-of-the-art generalization to unseen test shapes, improving upon
relevant baselines by a large margin. In particular, our method demonstrates
significant improvement in generalizing to unseen articulated poses.
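At the heart of the approach, the Frame Averaging operator symmetrizes a standard backbone by averaging it over a small, input-dependent set of group elements (a "frame"). As a concrete illustration, the following is a minimal sketch of an FA encoder that is invariant to rigid motions of a point cloud, assuming a PCA-based frame; the `backbone` placeholder, tensor shapes, and helper names are illustrative and not the paper's exact architecture.

```python
# Minimal sketch of Frame Averaging for a rigid-motion-invariant point-cloud
# encoder. Assumptions: a PCA-based SE(3) frame and a generic `backbone`
# callable; neither is the paper's exact construction.
import itertools
import torch

def pca_frames(X):
    """Return the centroid t and proper-rotation candidates R for points X (n, 3).

    The frame comes from the eigenvectors of the covariance matrix; all 2^3
    column sign flips are enumerated and only det = +1 candidates are kept,
    so each (R, t) pair lies in SE(3).
    """
    t = X.mean(dim=0)                          # translation part of the frame
    C = (X - t).T @ (X - t)                    # 3x3 covariance of centered points
    _, V = torch.linalg.eigh(C)                # columns = principal axes
    frames = []
    for signs in itertools.product((1.0, -1.0), repeat=3):
        R = V * torch.tensor(signs)            # flip column signs
        if torch.det(R) > 0:                   # keep proper rotations only
            frames.append(R)
    return t, torch.stack(frames)              # (4, 3, 3)

def fa_encoder(backbone, X):
    """FA operator: z(X) = mean_g backbone(g^{-1} X) over frame elements g = (R, t).

    Because the frame itself is equivariant, F(RX + t) = R F(X) + t (away from
    degenerate principal axes), the averaged code is invariant to rigid motions.
    """
    t, Rs = pca_frames(X)
    codes = [backbone((X - t) @ R) for R in Rs]  # rows of (X - t) @ R are R^T (x_i - t)
    return torch.stack(codes).mean(dim=0)

# Quick check: the encoding is (numerically) unchanged by a random rigid motion.
backbone = lambda P: P.abs().mean(dim=0)         # stand-in for a point-cloud encoder
X = torch.randn(128, 3)
Q, _ = torch.linalg.qr(torch.randn(3, 3))
if torch.det(Q) < 0:
    Q = -Q                                       # ensure a proper rotation
Z1 = fa_encoder(backbone, X)
Z2 = fa_encoder(backbone, X @ Q.T + torch.randn(3))
print((Z1 - Z2).abs().max())                     # ~0 up to numerical error
```

An equivariant decoder is obtained symmetrically by applying each frame element to the backbone's output before averaging, and the piecewise Euclidean setting described in the abstract uses a separate frame per shape part.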
Related papers
- Tensor Frames -- How To Make Any Message Passing Network Equivariant [15.687514300950813]
We present a novel framework for building equivariant message passing architectures.
We produce state-of-the-art results on normal vector regression on point clouds.
arXiv Detail & Related papers (2024-05-24T09:41:06Z) - AdaContour: Adaptive Contour Descriptor with Hierarchical Representation [52.381359663689004]
Existing angle-based contour descriptors suffer from lossy representation for non-star shapes.
AdaCon is able to represent shapes more accurately and robustly than other descriptors.
arXiv Detail & Related papers (2024-04-12T07:30:24Z) - ShapeMatcher: Self-Supervised Joint Shape Canonicalization,
Segmentation, Retrieval and Deformation [47.94499636697971]
We present ShapeMatcher, a unified self-supervised learning framework for joint shape canonicalization, segmentation, retrieval and deformation.
The key insight of ShapeMatcher is the simultaneous training of the four highly associated processes: canonicalization, segmentation, retrieval, and deformation.
arXiv Detail & Related papers (2023-11-18T15:44:57Z) - NeuForm: Adaptive Overfitting for Neural Shape Editing [67.16151288720677]
We propose NEUFORM to combine the advantages of both overfitted and generalizable representations by adaptively using the one most appropriate for each shape region.
We demonstrate edits that successfully reconfigure parts of human-designed shapes, such as chairs, tables, and lamps.
We compare with two state-of-the-art competitors and demonstrate clear improvements in terms of plausibility and fidelity of the resultant edits.
arXiv Detail & Related papers (2022-07-18T19:00:14Z) - Learning Symmetric Embeddings for Equivariant World Models [9.781637768189158]
We propose learning symmetric embedding networks (SENs) that encode an input space (e.g. images) into a feature space that transforms in a known way under the relevant symmetries.
This network can be trained end-to-end with an equivariant task network to learn an explicitly symmetric representation.
Our experiments demonstrate that SENs facilitate the application of equivariant networks to data with complex symmetry representations.
arXiv Detail & Related papers (2022-04-24T22:31:52Z) - Shape-Pose Disentanglement using SE(3)-equivariant Vector Neurons [59.83721247071963]
We introduce an unsupervised technique for encoding point clouds into a canonical shape representation, by disentangling shape and pose.
Our encoder is stable and consistent, meaning that the shape encoding is purely pose-invariant.
The extracted rotation and translation are able to semantically align different input shapes of the same class to a common canonical pose.
arXiv Detail & Related papers (2022-04-03T21:00:44Z) - Augmenting Implicit Neural Shape Representations with Explicit
Deformation Fields [95.39603371087921]
Implicit neural representations are a recent approach to learning shape collections as zero level-sets of neural networks (a minimal sketch of this setting appears after this list).
We advocate deformation-aware regularization for implicit neural representations, aiming at producing plausible deformations as latent code changes.
arXiv Detail & Related papers (2021-08-19T22:07:08Z) - Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
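As referenced in the "Augmenting Implicit Neural Shape Representations" entry above, the implicit setting represents a shape as the zero level-set of a neural network conditioned on a latent code. The following is a minimal sketch; the `ImplicitDecoder` name, latent dimension, and layer widths are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal sketch of an implicit neural shape representation: a coordinate MLP
# conditioned on a per-shape latent code predicts a signed distance, and the
# shape is the zero level-set {x : f(x, z) = 0}. Names and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                  # scalar signed distance per query
        )

    def forward(self, x, z):
        # x: (n, 3) query points, z: (latent_dim,) latent code of one shape
        z = z.expand(x.shape[0], -1)               # broadcast the code to every query
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

# The sign of the predicted distance classifies queries as inside/outside; a
# surface mesh would be extracted from the zero level-set (e.g. marching cubes).
decoder = ImplicitDecoder()
z = torch.randn(64)
queries = torch.rand(1000, 3) * 2 - 1              # query points in [-1, 1]^3
inside = decoder(queries, z) < 0                   # negative distance = interior
```

Training such a decoder, and any deformation-aware regularization of the kind advocated in that entry, is omitted; the sketch only shows how a shape is queried from its latent code.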