Shape-Pose Disentanglement using SE(3)-equivariant Vector Neurons
- URL: http://arxiv.org/abs/2204.01159v1
- Date: Sun, 3 Apr 2022 21:00:44 GMT
- Title: Shape-Pose Disentanglement using SE(3)-equivariant Vector Neurons
- Authors: Oren Katzir, Dani Lischinski, Daniel Cohen-Or
- Abstract summary: We introduce an unsupervised technique for encoding point clouds into a canonical shape representation, by disentangling shape and pose.
Our encoder is stable and consistent, meaning that the shape encoding is purely pose-invariant.
The extracted rotation and translation are able to semantically align different input shapes of the same class to a common canonical pose.
- Score: 59.83721247071963
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce an unsupervised technique for encoding point clouds into a
canonical shape representation, by disentangling shape and pose. Our encoder is
stable and consistent, meaning that the shape encoding is purely
pose-invariant, while the extracted rotation and translation are able to
semantically align different input shapes of the same class to a common
canonical pose. Specifically, we design an auto-encoder based on Vector Neuron
Networks, a rotation-equivariant neural network, whose layers we extend to
provide translation-equivariance in addition to their original rotation-equivariance. The
resulting encoder produces pose-invariant shape encoding by construction,
enabling our approach to focus on learning a consistent canonical pose for a
class of objects. Quantitative and qualitative experiments validate the
superior stability and consistency of our approach.
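Below is a minimal PyTorch sketch (not the authors' code) of the core idea: a Vector Neuron linear layer mixes 3D vector features only with scalar channel weights and is therefore rotation-equivariant; subtracting the centroid adds translation-equivariance; and the Gram matrix of the resulting global equivariant feature gives a shape code that is pose-invariant by construction. Layer names, sizes, and the pooling choice are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class VNLinear(nn.Module):
    """Vector Neuron linear layer: mixes 3D vector features across channels
    with scalar weights only, so rotating the input commutes with the layer."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_channels, in_channels))

    def forward(self, x):  # x: (B, C_in, N, 3)
        return torch.einsum('oc,bcnd->bond', self.weight, x)

class VNReLU(nn.Module):
    """Vector Neuron nonlinearity: keeps each vector feature or projects it onto
    the half-space of a learned, equivariantly predicted direction."""
    def __init__(self, channels):
        super().__init__()
        self.dir = VNLinear(channels, channels)

    def forward(self, x):  # x: (B, C, N, 3)
        d = self.dir(x)
        d = d / (d.norm(dim=-1, keepdim=True) + 1e-8)
        dot = (x * d).sum(dim=-1, keepdim=True)
        return torch.where(dot >= 0, x, x - dot * d)

class DisentanglingEncoder(nn.Module):
    """Sketch of shape-pose disentanglement: centroid subtraction removes the
    translation, VN layers stay rotation-equivariant, and the Gram matrix of
    the global equivariant feature is a pose-invariant shape code."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(VNLinear(1, channels), VNReLU(channels),
                                 VNLinear(channels, channels))

    def forward(self, pts):  # pts: (B, N, 3)
        t = pts.mean(dim=1, keepdim=True)                 # estimated translation
        x = (pts - t).unsqueeze(1)                        # (B, 1, N, 3), translation factored out
        g = self.net(x).mean(dim=2)                       # (B, C, 3) global rotation-equivariant feature
        shape_code = torch.einsum('bcd,bed->bce', g, g)   # (B, C, C), invariant under rotation of the input
        return shape_code, g, t.squeeze(1)                # invariant code, equivariant feature, translation

# toy usage
pts = torch.randn(2, 1024, 3)
code, g, t = DisentanglingEncoder()(pts)
```

Per the abstract, the method also extracts a rotation (and translation) that align different shapes of a class to a common canonical pose; the sketch above only illustrates why the shape code itself is pose-invariant.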
Related papers
- Learning SO(3)-Invariant Semantic Correspondence via Local Shape Transform [62.27337227010514]
We introduce a novel self-supervised Rotation-Invariant 3D correspondence learner with Local Shape Transform, dubbed RIST.
RIST learns to establish dense correspondences between shapes even under challenging intra-class variations and arbitrary orientations.
RIST demonstrates state-of-the-art performance on 3D part label transfer and semantic keypoint transfer given arbitrarily rotated point cloud pairs.
arXiv Detail & Related papers (2024-04-17T08:09:25Z)
- PaRot: Patch-Wise Rotation-Invariant Network via Feature Disentanglement and Pose Restoration [16.75367717130046]
State-of-the-art models are not robust to rotations, which remain an unknown prior in real-world applications.
We introduce a novel Patch-wise Rotation-invariant network (PaRot).
Our disentanglement module extracts high-quality rotation-robust features and the proposed lightweight model achieves competitive results.
arXiv Detail & Related papers (2023-02-06T02:13:51Z)
- Rethinking Rotation Invariance with Point Cloud Registration [18.829454172955202]
We propose an effective framework for rotation invariance learning via three sequential stages, namely rotation-invariant shape encoding, aligned feature integration, and deep feature registration.
Experimental results on 3D shape classification, part segmentation, and retrieval tasks prove the feasibility of our work.
arXiv Detail & Related papers (2022-12-31T08:17:09Z)
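To make the first stage concrete, here is a small assumed example (not the paper's actual features) of a rotation-invariant shape encoding: per-point descriptors built only from distances and angles relative to nearest neighbors and the centroid, quantities that are unchanged by any rigid rotation of the input.

```python
import torch

def rotation_invariant_features(pts, k=16):
    """Per-point rotation-invariant descriptors built purely from distances
    and angles, which do not change under rigid rotation of the input."""
    center = pts.mean(dim=-2, keepdim=True)                  # (B, 1, 3)
    d2 = torch.cdist(pts, pts)                               # (B, N, N) pairwise distances
    idx = d2.topk(k + 1, largest=False).indices[..., 1:]     # k nearest neighbors (skip self)
    neigh = torch.gather(pts.unsqueeze(1).expand(-1, pts.shape[1], -1, -1), 2,
                         idx.unsqueeze(-1).expand(-1, -1, -1, 3))   # (B, N, k, 3)
    to_neigh = neigh - pts.unsqueeze(2)                      # vectors to neighbors
    to_center = (center - pts).unsqueeze(2)                  # vector to the centroid
    dist = to_neigh.norm(dim=-1)                             # invariant: neighbor distances
    cos = torch.nn.functional.cosine_similarity(
        to_neigh, to_center.expand_as(to_neigh), dim=-1)     # invariant: angles to the centroid direction
    return torch.cat([dist, cos], dim=-1)                    # (B, N, 2k) rotation-invariant encoding
```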
- ConDor: Self-Supervised Canonicalization of 3D Pose for Partial Shapes [55.689763519293464]
ConDor is a self-supervised method that learns to canonicalize the 3D orientation and position for full and partial 3D point clouds.
During inference, our method takes an unseen full or partial 3D point cloud at an arbitrary pose and outputs an equivariant canonical pose.
arXiv Detail & Related papers (2022-01-19T18:57:21Z)
- Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
arXiv Detail & Related papers (2021-12-03T06:41:19Z)
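As an illustration of the frame-averaging idea (a generic sketch, not code from this paper), the example below builds a small set of SE(3) frames from PCA of the input point cloud and averages a backbone over them; because the frames transform with the input, the averaged map is equivariant to rotation and translation. The PCA-based frame construction and the assumption of a point-valued backbone are choices made for the example.

```python
import itertools
import torch

def pca_frames(pts):  # pts: (N, 3)
    """Candidate SE(3) frames from PCA: the centroid plus the covariance
    eigenvectors, with sign flips to resolve the eigenvector ambiguity
    (third axis recomputed so every frame stays right-handed)."""
    t = pts.mean(dim=0)
    centered = pts - t
    _, vecs = torch.linalg.eigh(centered.T @ centered)     # columns are eigenvectors
    frames = []
    for sx, sy in itertools.product((1.0, -1.0), repeat=2):
        r = vecs * torch.tensor([sx, sy, 1.0])
        r[:, 2] = torch.linalg.cross(r[:, 0], r[:, 1])      # enforce a right-handed frame
        frames.append((r, t))
    return frames                                           # four rotations sharing one translation

def frame_average(backbone, pts):
    """Equivariance by averaging: pose-normalize by each frame, run an arbitrary
    backbone that outputs point-like quantities, map the result back through the
    frame, and average over the frame set."""
    outs = []
    for r, t in pca_frames(pts):
        canon = (pts - t) @ r                    # express the input in the frame
        outs.append(backbone(canon) @ r.T + t)   # map the output back to world coordinates
    return torch.stack(outs).mean(dim=0)

# toy usage with an identity backbone: the averaged output recovers the input
pts = torch.randn(1024, 3)
out = frame_average(lambda x: x, pts)
```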
- Augmenting Implicit Neural Shape Representations with Explicit Deformation Fields [95.39603371087921]
Implicit neural representation is a recent approach to learn shape collections as zero level-sets of neural networks.
We advocate deformation-aware regularization for implicit neural representations, aiming at producing plausible deformations as latent code changes.
arXiv Detail & Related papers (2021-08-19T22:07:08Z)
- Learning 3D Dense Correspondence via Canonical Point Autoencoder [108.20735652143787]
We propose a canonical point autoencoder (CPAE) that predicts dense correspondences between 3D shapes of the same category.
The autoencoder performs two key functions: (a) encoding an arbitrarily ordered point cloud to a canonical primitive, and (b) decoding the primitive back to the original input instance shape.
arXiv Detail & Related papers (2021-07-10T15:54:48Z)
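A minimal sketch of the two functions named above, with an architecture invented for illustration rather than taken from the CPAE paper: every input point is mapped onto a shared unit-sphere primitive, and a decoder maps primitive points plus a global code back to the instance shape. Training such a model would typically use a Chamfer-style reconstruction loss.

```python
import torch
import torch.nn as nn

class CanonicalPointAE(nn.Module):
    """Sketch of a canonical point autoencoder: each input point is mapped onto
    a shared primitive (here a unit sphere), and a decoder maps primitive points
    back to the instance shape, so the same sphere location plays the same
    semantic role across shapes of a category."""
    def __init__(self, dim=128):
        super().__init__()
        self.point_feat = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.to_primitive = nn.Linear(2 * dim, 3)                     # per-point location on the sphere
        self.decoder = nn.Sequential(nn.Linear(3 + dim, dim), nn.ReLU(), nn.Linear(dim, 3))

    def forward(self, pts):                                           # pts: (B, N, 3)
        f = self.point_feat(pts)                                      # (B, N, D) per-point features
        g = f.max(dim=1, keepdim=True).values                         # (B, 1, D) global shape code
        h = torch.cat([f, g.expand_as(f)], dim=-1)
        prim = self.to_primitive(h)
        prim = prim / (prim.norm(dim=-1, keepdim=True) + 1e-8)        # project onto the unit sphere
        recon = self.decoder(torch.cat([prim, g.expand(-1, prim.shape[1], -1)], dim=-1))
        return prim, recon                                            # canonical embedding + reconstruction
```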
- Rotation-Invariant Point Convolution With Multiple Equivariant Alignments [1.0152838128195467]
We show that using rotation-equivariant alignments, it is possible to make any convolutional layer rotation-invariant.
With this core layer, we design rotation-invariant architectures which improve state-of-the-art results in both object classification and semantic segmentation.
arXiv Detail & Related papers (2020-12-07T20:47:46Z)
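As a sketch of the alignment idea (not the paper's architecture, which uses multiple equivariant alignments and proper point convolutions), the example below estimates a PCA frame per local neighborhood; the frame rotates with the data, so expressing neighbors in it cancels the rotation, and an ordinary shared MLP applied afterwards is rotation-invariant up to the PCA sign ambiguity.

```python
import torch
import torch.nn as nn

def pca_alignment(neigh):
    """Rotation-equivariant alignment: the PCA axes of a neighborhood rotate
    with the data, so expressing points in those axes cancels the rotation
    (up to sign/degeneracy ambiguities, ignored in this sketch)."""
    centered = neigh - neigh.mean(dim=-2, keepdim=True)
    cov = centered.transpose(-1, -2) @ centered
    _, vecs = torch.linalg.eigh(cov)                 # (..., 3, 3), columns = principal axes
    return centered @ vecs                           # coordinates in the aligned frame

class AlignedPointConv(nn.Module):
    """Sketch of a rotation-invariant point 'convolution': align each local
    neighborhood equivariantly, then apply an ordinary shared MLP."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim))

    def forward(self, neighborhoods):                # (B, N, K, 3): K neighbors per point
        aligned = pca_alignment(neighborhoods)       # rotation cancelled by the alignment
        return self.mlp(aligned).max(dim=2).values   # (B, N, out_dim) per-point features
```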
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.