Non-parametric regression for robot learning on manifolds
- URL: http://arxiv.org/abs/2310.19561v2
- Date: Tue, 14 May 2024 11:11:43 GMT
- Title: Non-parametric regression for robot learning on manifolds
- Authors: P. C. Lopez-Custodio, K. Bharath, A. Kucukyilmaz, S. P. Preston
- Abstract summary: In robot learning, manifold-valued data are often handled by relating the manifold to a suitable Euclidean space.
We propose an "intrinsic" approach to regression that works directly within the manifold.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many of the tools available for robot learning were designed for Euclidean data. However, many applications in robotics involve manifold-valued data. A common example is orientation; this can be represented as a 3-by-3 rotation matrix or a quaternion, the spaces of which are non-Euclidean manifolds. In robot learning, manifold-valued data are often handled by relating the manifold to a suitable Euclidean space, either by embedding the manifold or by projecting the data onto one or several tangent spaces. These approaches can result in poor predictive accuracy and convoluted algorithms. In this paper, we propose an "intrinsic" approach to regression that works directly within the manifold. It involves taking a suitable probability distribution on the manifold, letting its parameter be a function of a predictor variable, such as time, then estimating that function non-parametrically via a "local likelihood" method that incorporates a kernel. We name the method kernelised likelihood estimation. The approach is conceptually simple and generally applicable to different manifolds. We implement it with three different types of manifold-valued data that commonly appear in robotics applications. The results of these experiments show better predictive accuracy than projection-based algorithms.
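To make the method concrete, consider directional data on the unit sphere modelled with a von Mises-Fisher distribution whose mean direction varies with time. For a fixed concentration, maximising the kernel-weighted log-likelihood at a query time reduces to a normalised weighted average of the observations. The sketch below is a minimal illustration of kernelised likelihood estimation under these assumptions (Gaussian kernel, fixed bandwidth, the vMF family); it is not the authors' implementation, and the function names are invented.

```python
import numpy as np

def kernelised_vmf_mean(t_query, t, Y, h=0.1):
    """Kernel-weighted MLE of the von Mises-Fisher mean direction.

    t : (n,) predictor values (e.g. time)
    Y : (n, d) unit vectors on the sphere S^{d-1}
    h : kernel bandwidth

    For fixed concentration, maximising the kernel-weighted vMF
    log-likelihood sum_i w_i * kappa * <mu, y_i> over unit mu gives
    mu = sum_i w_i y_i / ||sum_i w_i y_i||.
    """
    w = np.exp(-0.5 * ((t - t_query) / h) ** 2)  # Gaussian kernel weights
    m = w @ Y                                    # weighted resultant vector
    return m / np.linalg.norm(m)                 # project back to the sphere

# toy example: a smoothly rotating direction observed with noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.stack([np.cos(t * np.pi), np.sin(t * np.pi), np.zeros_like(t)], axis=1)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)

print(kernelised_vmf_mean(0.5, t, noisy))  # roughly [0, 1, 0]
```

Other manifolds follow the same recipe described in the abstract: pick a distribution on the manifold and maximise the kernel-weighted log-likelihood of its parameter at each query point.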
Related papers
- Riemannian coordinate descent algorithms on matrix manifolds [12.05722932030768]
We provide a general framework for developing computationally efficient coordinate descent (CD) algorithms on matrix manifolds.
We propose CD algorithms for various manifolds such as Stiefel, Grassmann, (generalized) hyperbolic, symplectic, and symmetric positive (semi)definite.
We analyze their convergence and complexity, and empirically illustrate their efficacy in several applications.
arXiv Detail & Related papers (2024-06-04T11:37:11Z)
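To picture coordinate descent on a matrix manifold: on the orthogonal group, a natural "coordinate" move is a Givens rotation acting on one pair of axes. The toy sketch below illustrates this general idea only; it is not the paper's algorithm, and the objective and helper names are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def givens(n, i, j, theta):
    """n x n Givens rotation in the (i, j) plane."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    return G

def cd_on_orthogonal(f, n, sweeps=50):
    """Coordinate descent on the orthogonal group: one Givens angle at a time."""
    X = np.eye(n)
    for _ in range(sweeps):
        for i in range(n):
            for j in range(i + 1, n):
                # 1-D subproblem: rotate in the (i, j) plane only
                res = minimize_scalar(lambda th: f(givens(n, i, j, th) @ X),
                                      bounds=(-np.pi, np.pi), method="bounded")
                X = givens(n, i, j, res.x) @ X
    return X

# toy objective: find the rotation closest to A in Frobenius norm
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
X = cd_on_orthogonal(lambda X: np.linalg.norm(X - A) ** 2, 4)
print(np.allclose(X.T @ X, np.eye(4), atol=1e-8))  # iterates stay on the manifold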
- Learning on manifolds without manifold learning [0.0]
Function approximation based on data drawn randomly from an unknown distribution is an important problem in machine learning.
In this paper, we project the unknown manifold as a submanifold of an ambient hypersphere and study the question of constructing a one-shot approximation using specially designed kernels on the hypersphere.
arXiv Detail & Related papers (2024-02-20T03:27:53Z)
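As a generic stand-in for kernel-based approximation on a hypersphere (the paper's specially designed one-shot kernels are not reproduced here), the sketch below applies plain Nadaraya-Watson smoothing with an inner-product kernel; the kernel choice and names are assumptions.

```python
import numpy as np

def sphere_kernel_smoother(X_train, y_train, X_query, kappa=20.0):
    """Nadaraya-Watson estimate on the hypersphere S^{d-1}.

    Uses the inner-product kernel k(x, x') = exp(kappa * <x, x'>),
    a generic stand-in for the specially designed kernels in the paper.
    """
    K = np.exp(kappa * (X_query @ X_train.T))  # (m, n) kernel matrix
    return (K @ y_train) / K.sum(axis=1)       # locally weighted average

# toy: learn f(x) = x_0^2 from points on S^2
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = X[:, 0] ** 2
q = np.array([[1.0, 0.0, 0.0]])
print(sphere_kernel_smoother(X, y, q))  # close to 1.0
```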
- Implicit Manifold Gaussian Process Regression [49.0787777751317]
Gaussian process regression is widely used to provide well-calibrated uncertainty estimates.
It struggles with high-dimensional data because it does not exploit the implicit low-dimensional manifold on which the data actually lie.
In this paper we propose a technique capable of inferring implicit structure directly from data (labeled and unlabeled) in a fully differentiable way.
arXiv Detail & Related papers (2023-10-30T09:52:48Z)
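One common way to realise this idea, sketched below under assumptions rather than as the paper's fully differentiable method, is to approximate the implicit manifold with a nearest-neighbour graph over labelled and unlabelled points and build the GP kernel from a Matérn-style spectral filter of the graph Laplacian.

```python
import numpy as np
from scipy.spatial import cKDTree

def graph_laplacian_kernel(X, k=10, kappa=1.0, nu=2.0):
    """Matern-style kernel from the Laplacian of a kNN graph over X."""
    n = len(X)
    _, idx = cKDTree(X).query(X, k + 1)        # neighbours (first is self)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i, 1:]] = 1.0
    W = np.maximum(W, W.T)                     # symmetrise adjacency
    L = np.diag(W.sum(1)) - W                  # combinatorial Laplacian
    lam, U = np.linalg.eigh(L)
    spec = (2 * nu / kappa**2 + lam) ** (-nu)  # Matern-like spectral filter
    return U @ np.diag(spec) @ U.T

# GP regression with labelled indices `obs` of the point cloud X
rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 300)
X = np.column_stack([np.cos(theta), np.sin(theta)])   # circle: implicit manifold
y_full = np.sin(3 * theta)
obs = rng.choice(300, size=30, replace=False)

K = graph_laplacian_kernel(X)
K_oo = K[np.ix_(obs, obs)] + 1e-3 * np.eye(len(obs))  # noise jitter
mean = K[:, obs] @ np.linalg.solve(K_oo, y_full[obs]) # posterior mean everywhere
print(float(np.abs(mean - y_full).mean()))            # fit error on whole cloud
```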
- Manifold Learning with Sparse Regularised Optimal Transport [0.17205106391379024]
Real-world datasets are subject to noisy observations and sampling, so distilling information about the underlying manifold is a major challenge.
We propose a method for manifold learning that utilises a symmetric version of optimal transport with a quadratic regularisation.
We prove that the resulting kernel is consistent with a Laplace-type operator in the continuous limit, establish robustness to heteroskedastic noise and exhibit these results in simulations.
arXiv Detail & Related papers (2023-07-19T08:05:46Z)
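For a rough picture of OT-derived affinities, the sketch below uses symmetric entropic regularisation (Sinkhorn scaling), which yields a dense doubly stochastic kernel; the paper's quadratic regularisation, which produces sparse kernels, is not implemented here.

```python
import numpy as np

def symmetric_sinkhorn_kernel(X, eps=0.5, iters=200):
    """Doubly stochastic affinity matrix via symmetric entropic OT.

    Stand-in for the paper's quadratically regularised (sparse) kernel:
    entropic regularisation gives a dense kernel instead.
    """
    C = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # squared costs
    K = np.exp(-C / eps)
    u = np.ones(len(X))
    for _ in range(iters):
        u = np.sqrt(u / (K @ u))        # damped symmetric scaling update
    return u[:, None] * K * u[None, :]  # P = diag(u) K diag(u)

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 2))
P = symmetric_sinkhorn_kernel(X)
print(np.abs(P.sum(axis=1) - 1).max(), np.allclose(P, P.T))  # small, True
```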
- Hyperbolic Vision Transformers: Combining Improvements in Metric Learning [116.13290702262248]
We propose a new hyperbolic-based model for metric learning.
At the core of our method is a vision transformer with output embeddings mapped to hyperbolic space.
We evaluate the proposed model with six different formulations on four datasets.
arXiv Detail & Related papers (2022-03-21T09:48:23Z)
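The two basic operations behind such models are an exponential map that sends Euclidean network outputs into the Poincaré ball and a closed-form hyperbolic distance. The sketch below shows the standard unit-ball (curvature c = 1) formulas, not the paper's full architecture.

```python
import numpy as np

def expmap0(v):
    """Exponential map at the origin of the unit Poincare ball (c = 1)."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return np.tanh(norm) * v / np.maximum(norm, 1e-12)

def poincare_distance(x, y):
    """Closed-form geodesic distance in the unit Poincare ball."""
    num = 2 * np.sum((x - y) ** 2, axis=-1)
    den = (1 - np.sum(x**2, axis=-1)) * (1 - np.sum(y**2, axis=-1))
    return np.arccosh(1 + num / den)

# map two Euclidean embeddings into the ball and compare them
u, w = np.array([0.3, -0.1]), np.array([1.5, 0.2])
print(poincare_distance(expmap0(u), expmap0(w)))
```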
- Measuring dissimilarity with diffeomorphism invariance [94.02751799024684]
We introduce DID, a pairwise dissimilarity measure applicable to a wide range of data spaces.
We prove that DID enjoys properties which make it relevant for theoretical study and practical use.
arXiv Detail & Related papers (2022-02-11T13:51:30Z)
- Manifold Hypothesis in Data Analysis: Double Geometrically-Probabilistic Approach to Manifold Dimension Estimation [92.81218653234669]
We present a new approach to manifold hypothesis checking and underlying manifold dimension estimation.
Our geometrical method adapts the well-known box-counting algorithm for Minkowski dimension calculation to sparse data.
Experiments on real datasets show that the suggested approach, based on a combination of the two methods, is powerful and effective.
arXiv Detail & Related papers (2021-07-08T15:35:54Z)
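The underlying box-counting step is simple to state: count how many grid cells of side ε the data occupy and read the dimension off the slope of log N(ε) versus log(1/ε). The sketch below shows this plain version only; the paper's modification for sparse data and its probabilistic companion method are omitted.

```python
import numpy as np

def box_counting_dimension(X, scales=(0.5, 0.25, 0.125, 0.0625)):
    """Estimate the Minkowski (box-counting) dimension of a point cloud X."""
    X = (X - X.min(0)) / (X.max(0) - X.min(0))        # normalise to unit cube
    counts = []
    for eps in scales:
        cells = np.unique(np.floor(X / eps), axis=0)  # occupied grid cells
        counts.append(len(cells))
    # slope of log N(eps) vs log(1/eps) is the dimension estimate
    slope, _ = np.polyfit(np.log(1 / np.array(scales)), np.log(counts), 1)
    return slope

# toy: a circle (1-dimensional manifold) embedded in the plane
theta = np.linspace(0, 2 * np.pi, 5000)
X = np.column_stack([np.cos(theta), np.sin(theta)])
print(box_counting_dimension(X))  # close to 1
```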
- Deep regression on manifolds: a 3D rotation case study [0.0]
We characterize the properties that a differentiable function mapping arbitrary inputs of a Euclidean space onto this manifold should satisfy to allow proper training.
We compare various differentiable mappings on the 3D rotation space and conjecture about the importance of the local linearity of the mapping.
We notably show that a mapping based on Procrustes orthonormalization of a 3x3 matrix generally performs best among the ones considered.
arXiv Detail & Related papers (2021-03-30T13:07:36Z)
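Procrustes orthonormalization projects an unconstrained 3-by-3 network output to the nearest rotation matrix via an SVD, flipping a sign if needed to keep the determinant positive. A minimal sketch:

```python
import numpy as np

def procrustes_to_so3(M):
    """Project a 3x3 matrix to the nearest rotation (special orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det = +1
    return U @ D @ Vt

M = np.random.default_rng(5).standard_normal((3, 3))  # raw network output
R = procrustes_to_so3(M)
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```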
- Switch Spaces: Learning Product Spaces with Sparse Gating [48.591045282317424]
We propose Switch Spaces, a data-driven approach for learning representations in product space.
We introduce sparse gating mechanisms that learn to choose, combine and switch spaces.
Experiments on knowledge graph completion and item recommendations show that the proposed switch space achieves new state-of-the-art performance.
arXiv Detail & Related papers (2021-02-17T11:06:59Z)
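A minimal picture of sparse gating (a toy forward pass with invented names and shapes, not the paper's architecture): score the candidate spaces, keep only the top-k gates, renormalise, and combine the selected spaces' outputs.

```python
import numpy as np

def sparse_gate(scores, k=2):
    """Keep the top-k gate scores, renormalise with a softmax, zero the rest."""
    keep = np.argsort(scores)[-k:]
    gates = np.zeros_like(scores)
    w = np.exp(scores[keep] - scores[keep].max())
    gates[keep] = w / w.sum()
    return gates

# toy: combine per-space similarity scores for one query-item pair
space_scores = np.array([0.7, -1.2, 0.3, 2.1])  # e.g. spherical, hyperbolic, ...
gate_logits = np.array([0.5, 0.1, -0.3, 1.0])   # produced by a gating network
gates = sparse_gate(gate_logits, k=2)
print(gates, float(gates @ space_scores))       # sparse mixture of spaces
```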
- Disentangling by Subspace Diffusion [72.1895236605335]
We show that fully unsupervised factorization of a data manifold is possible if the true metric of the manifold is known.
Our work reduces the question of whether unsupervised disentangling is possible to that of whether unsupervised metric learning is possible, providing a unifying insight into the geometric nature of representation learning.
arXiv Detail & Related papers (2020-06-23T13:33:19Z)