Deep regression on manifolds: a 3D rotation case study
- URL: http://arxiv.org/abs/2103.16317v1
- Date: Tue, 30 Mar 2021 13:07:36 GMT
- Title: Deep regression on manifolds: a 3D rotation case study
- Authors: Romain Brégier
- Abstract summary: We establish a set of properties that a differentiable function mapping arbitrary inputs of a Euclidean space onto this manifold should satisfy to allow proper training.
We compare various differentiable mappings on the 3D rotation space, and conjecture about the importance of the local linearity of the mapping.
We notably show that a mapping based on Procrustes orthonormalization of a 3x3 matrix generally performs best among the ones considered.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many problems in machine learning involve regressing outputs that do not lie
on a Euclidean space, such as a discrete probability distribution, or the pose
of an object. An approach to tackle these problems through gradient-based
learning consists in including in the deep learning architecture a
differentiable function mapping arbitrary inputs of a Euclidean space onto this
manifold. In this work, we establish a set of properties that such mapping
should satisfy to allow proper training, and illustrate it in the case of 3D
rotations. Through theoretical considerations and methodological experiments on
a variety of tasks, we compare various differentiable mappings on the 3D
rotation space, and conjecture about the importance of the local linearity of
the mapping. We notably show that a mapping based on Procrustes
orthonormalization of a 3x3 matrix generally performs best among the ones
considered, but that rotation-vector representation might also be suitable when
restricted to small angles.
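
As a minimal sketch of the two mappings highlighted in the abstract, the snippet below implements a Procrustes orthonormalization of a 3x3 matrix (via SVD) and a rotation-vector (axis-angle) mapping in PyTorch. The function names, batching conventions, and the small-angle guard are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def procrustes_to_rotation(m: torch.Tensor) -> torch.Tensor:
    """Project an arbitrary (..., 3, 3) matrix onto SO(3): the closest
    rotation in the Frobenius sense (special orthogonal Procrustes)."""
    u, _, vh = torch.linalg.svd(m)
    # Flip the last column of U wherever det(U @ Vh) = -1, so the result
    # is a proper rotation (det = +1) rather than a reflection.
    det = torch.det(u @ vh)
    u = torch.cat([u[..., :, :-1], u[..., :, -1:] * det[..., None, None]], dim=-1)
    return u @ vh

def rotation_vector_to_matrix(v: torch.Tensor) -> torch.Tensor:
    """Map a (..., 3) rotation vector (axis * angle) to a rotation matrix
    via Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2."""
    theta = v.norm(dim=-1, keepdim=True).clamp(min=1e-8)  # crude guard near zero angle
    x, y, z = (v / theta).unbind(-1)
    zero = torch.zeros_like(x)
    # Skew-symmetric cross-product matrix K of the unit axis.
    k = torch.stack([zero, -z, y,
                     z, zero, -x,
                     -y, x, zero], dim=-1).reshape(v.shape[:-1] + (3, 3))
    s, c = torch.sin(theta)[..., None], torch.cos(theta)[..., None]
    eye = torch.eye(3, dtype=v.dtype, device=v.device)
    return eye + s * k + (1.0 - c) * (k @ k)

# A network head can output 9 (or 3) raw Euclidean values per sample:
raw = torch.randn(4, 9, requires_grad=True)
rot = procrustes_to_rotation(raw.view(-1, 3, 3))  # (4, 3, 3), each with det = +1
```

The rotation-vector map is locally linear around the identity, consistent with the abstract's remark that it can be suitable when restricted to small angles; the Procrustes projection is well-defined for generic 3x3 inputs and is differentiable away from degenerate singular values.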
Related papers
- Non-parametric regression for robot learning on manifolds [0.0]
In robot learning, manifold-valued data are often handled by relating the manifold to a suitable Euclidean space.
We propose an "intrinsic" approach to regression that works directly within the manifold.
arXiv Detail & Related papers (2023-10-30T14:17:32Z)
- Evaluating 3D Shape Analysis Methods for Robustness to Rotation Invariance [22.306775502181818]
This paper analyzes the robustness of recent 3D shape descriptors to SO(3) rotations.
We consider a database of 3D indoor scenes, where objects occur in different orientations.
arXiv Detail & Related papers (2023-05-29T18:39:31Z)
- Neural Vector Fields: Implicit Representation by Explicit Learning [63.337294707047036]
We propose Neural Vector Fields (NVF), a novel 3D representation method.
It adopts both the explicit learning process of manipulating meshes directly and the implicit representation of unsigned distance functions (UDFs).
Our method first predicts displacement queries towards the surface and models shapes as text reconstructions.
arXiv Detail & Related papers (2023-03-08T02:36:09Z)
- Measuring dissimilarity with diffeomorphism invariance [94.02751799024684]
We introduce DID, a pairwise dissimilarity measure applicable to a wide range of data spaces.
We prove that DID enjoys properties which make it relevant for theoretical study and practical use.
arXiv Detail & Related papers (2022-02-11T13:51:30Z)
- DeepMesh: Differentiable Iso-Surface Extraction [53.77622255726208]
We introduce a differentiable way to produce explicit surface mesh representations from Deep Implicit Fields.
Our key insight is that by reasoning on how implicit field perturbations impact local surface geometry, one can ultimately differentiate the 3D location of surface samples.
We exploit this to define DeepMesh, an end-to-end differentiable mesh representation that can vary its topology.
arXiv Detail & Related papers (2021-06-20T20:12:41Z)
- Equivariant Point Network for 3D Point Cloud Analysis [17.689949017410836]
We propose an effective and practical SE(3) (3D translation and rotation) equivariant network for point cloud analysis.
First, we present SE(3) separable point convolution, a novel framework that breaks down the 6D convolution into two separable convolutional operators.
Second, we introduce an attention layer to effectively harness the expressiveness of the equivariant features.
arXiv Detail & Related papers (2021-03-25T21:57:10Z)
- Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes [86.2129580231191]
Adjoint Rigid Transform (ART) Network is a neural module which can be integrated with a variety of 3D networks.
ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many tasks.
We will release our code and pre-trained models for further research.
arXiv Detail & Related papers (2021-02-01T20:58:45Z)
- Rotation-Invariant Point Convolution With Multiple Equivariant Alignments [1.0152838128195467]
We show that using rotation-equivariant alignments, it is possible to make any convolutional layer rotation-invariant.
With this core layer, we design rotation-invariant architectures which improve state-of-the-art results in both object classification and semantic segmentation.
arXiv Detail & Related papers (2020-12-07T20:47:46Z)
- Learning to Orient Surfaces by Self-supervised Spherical CNNs [15.554429755106332]
Defining and reliably finding a canonical orientation for 3D surfaces is key to many Computer Vision and Robotics applications.
We show the feasibility of learning a robust canonical orientation for surfaces represented as point clouds.
Our method learns such feature maps from raw data by a self-supervised training procedure and robustly selects a rotation to transform the input point cloud into a learned canonical orientation.
arXiv Detail & Related papers (2020-11-06T11:43:57Z)
- An Analysis of SVD for Deep Rotation Estimation [63.97835949897361]
We present a theoretical analysis that shows SVD is the natural choice for projecting onto the rotation group.
Our analysis shows that simply replacing existing representations with the SVD orthogonalization procedure obtains state-of-the-art performance in many deep learning applications.
arXiv Detail & Related papers (2020-06-25T17:58:28Z)
- Disentangling by Subspace Diffusion [72.1895236605335]
We show that fully unsupervised factorization of a data manifold is possible if the true metric of the manifold is known.
Our work reduces the question of whether unsupervised disentangling is possible to that of whether unsupervised metric learning is possible, providing a unifying insight into the geometric nature of representation learning.
arXiv Detail & Related papers (2020-06-23T13:33:19Z)