Learning Riemannian Stable Dynamical Systems via Diffeomorphisms
- URL: http://arxiv.org/abs/2211.03169v1
- Date: Sun, 6 Nov 2022 16:28:45 GMT
- Title: Learning Riemannian Stable Dynamical Systems via Diffeomorphisms
- Authors: Jiechao Zhang, Hadi Beik-Mohammadi, Leonel Rozo
- Abstract summary: Dexterous and autonomous robots should be capable of executing elaborated dynamical motions skillfully.
Learning techniques may be leveraged to build models of such dynamic skills.
To accomplish this, the learning model needs to encode a stable vector field that resembles the desired motion dynamics.
- Score: 0.23204178451683263
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Dexterous and autonomous robots should be capable of executing elaborated
dynamical motions skillfully. Learning techniques may be leveraged to build
models of such dynamic skills. To accomplish this, the learning model needs to
encode a stable vector field that resembles the desired motion dynamics. This
is challenging as the robot state does not evolve on a Euclidean space, and
therefore the stability guarantees and vector field encoding need to account
for the geometry arising from, for example, the orientation representation. To
tackle this problem, we propose learning Riemannian stable dynamical systems
(RSDS) from demonstrations, allowing us to account for different geometric
constraints resulting from the dynamical system state representation. Our
approach provides Lyapunov-stability guarantees on Riemannian manifolds that
are enforced on the desired motion dynamics via diffeomorphisms built on neural
manifold ODEs. We show that our Riemannian approach makes it possible to learn
stable dynamical systems displaying complicated vector fields on both
illustrative examples and real-world manipulation tasks, where Euclidean
approximations fail.
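The core construction behind the stability guarantee can be illustrated in a simplified, purely Euclidean form: a provably stable base system is pulled back through a diffeomorphism, and the resulting vector field inherits its stability. The sketch below (assuming numpy) uses a hand-crafted invertible map phi and a forward-Euler rollout; it is only an intuition aid, not the paper's construction, which parameterizes the diffeomorphism with neural manifold ODEs and handles Riemannian state spaces such as orientation manifolds.

```python
import numpy as np

# Minimal Euclidean sketch (NOT the paper's Riemannian construction): a globally
# stable base system f_base(y) = -y is pulled back through a fixed diffeomorphism
# phi, giving x_dot = J_phi(x)^{-1} f_base(phi(x)). In the paper, phi is built on
# neural manifold ODEs and the state lives on a Riemannian manifold; here phi is
# a simple hand-crafted invertible map.

rng = np.random.default_rng(0)
W = 0.4 * rng.standard_normal((2, 2))   # small weights keep x -> phi(x) invertible

def phi(x):
    """Diffeomorphism phi(x) = x + 0.5 * tanh(W x) (the residual is a contraction)."""
    return x + 0.5 * np.tanh(W @ x)

def jac_phi(x):
    """Jacobian of phi at x: I + 0.5 * diag(1 - tanh^2(W x)) W."""
    s = 1.0 - np.tanh(W @ x) ** 2
    return np.eye(2) + 0.5 * (s[:, None] * W)

def f_base(y):
    """Globally asymptotically stable base dynamics with equilibrium y = 0."""
    return -y

def f_learned(x):
    """Pulled-back vector field: stable because it is conjugate to f_base via phi."""
    return np.linalg.solve(jac_phi(x), f_base(phi(x)))

# Forward-Euler rollout: the trajectory converges to the pre-image of the origin.
x = np.array([2.0, -1.5])
for _ in range(2000):
    x = x + 0.01 * f_learned(x)
print("final state:", x)          # close to phi^{-1}(0), which is 0 here
print("phi(final):", phi(x))      # close to the origin of the base system
```

Because the learned field is conjugate to a stable system through an invertible map, every rollout converges to the pre-image of the base equilibrium; the paper's contribution is making this argument hold when the state does not evolve on a Euclidean space.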
Related papers
- Projected Neural Differential Equations for Learning Constrained Dynamics [3.570367665112327]
We introduce a new method for constraining neural differential equations based on projection of the learned vector field to the tangent space of the constraint manifold.
Projected neural differential equations (PNDEs) outperform existing methods while requiring fewer hyperparameters.
The proposed approach demonstrates significant potential for enhancing the modeling of constrained dynamical systems.
arXiv Detail & Related papers (2024-10-31T06:32:43Z)
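The projection idea in the entry above has a simple closed form when the constraint manifold is a level set; the following sketch (assuming numpy) uses the unit sphere as the constraint and a hypothetical stand-in f_theta for the learned vector field. It only illustrates the tangent-space projection, not the PNDE training procedure.

```python
import numpy as np

# Sketch of projecting a learned vector field onto the tangent space of a
# constraint manifold, here the unit sphere ||x|| = 1. The "learned" field
# f_theta below is a stand-in for a neural network.

def f_theta(x):
    """Placeholder for a learned (unconstrained) vector field."""
    A = np.array([[0.0, -1.0, 0.2],
                  [1.0,  0.0, -0.3],
                  [-0.2, 0.3, 0.0]])
    return A @ x + 0.1 * np.sin(x)

def project_to_tangent(x, v):
    """Project v onto the tangent space of the unit sphere at x: v - (v . n) n."""
    n = x / np.linalg.norm(x)           # unit normal of the level set ||x|| = 1
    return v - (v @ n) * n

# Euler integration of the projected field; the constraint drift stays small.
x = np.array([1.0, 0.0, 0.0])
for _ in range(5000):
    x = x + 0.001 * project_to_tangent(x, f_theta(x))
print("||x|| after rollout:", np.linalg.norm(x))   # remains close to 1
```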
- Mamba-FSCIL: Dynamic Adaptation with Selective State Space Model for Few-Shot Class-Incremental Learning [113.89327264634984]
Few-shot class-incremental learning (FSCIL) confronts the challenge of integrating new classes into a model with minimal training samples.
Traditional methods widely adopt static adaptation relying on a fixed parameter space to learn from data that arrive sequentially.
We propose a dual selective SSM projector that dynamically adjusts the projection parameters based on the intermediate features for dynamic adaptation.
arXiv Detail & Related papers (2024-07-08T17:09:39Z)
- Deep Learning for Koopman-based Dynamic Movement Primitives [0.0]
We propose a novel approach by joining the theories of Koopman Operators and Dynamic Movement Primitives to Learning from Demonstration.
Our approach, named ADMD, projects nonlinear dynamical systems into linear latent spaces such that a solution reproduces the desired complex motion.
Our results are comparable to Extended Dynamic Mode Decomposition on the LASA Handwriting dataset, but with training on only a small fraction of the letters.
arXiv Detail & Related papers (2023-12-06T07:33:22Z)
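As a point of reference for the linear-latent-space idea above, here is a minimal sketch of the Extended Dynamic Mode Decomposition baseline mentioned in the summary (assuming numpy): states are lifted with a hand-picked dictionary of observables and a linear operator K is fitted by least squares. The deep approach of the paper learns the lifting instead; the toy system and dictionary here are illustrative choices, not taken from the paper.

```python
import numpy as np

# EDMD-style sketch: lift states with a fixed dictionary of observables and fit
# a linear operator K by least squares, so the nonlinear dynamics evolve
# (approximately) linearly in the lifted space.

def lift(x):
    """Hand-picked dictionary of observables; a deep approach learns this map."""
    return np.array([x[0], x[1], x[0] * x[1], x[0] ** 2, x[1] ** 2, 1.0])

def step(x):
    """Toy nonlinear system used to generate demonstration data."""
    return np.array([0.9 * x[0], 0.5 * x[1] + 0.2 * x[0] ** 2])

# Collect snapshot pairs (x_t, x_{t+1}) from a few short rollouts.
rng = np.random.default_rng(1)
X, Y = [], []
for _ in range(50):
    x = rng.uniform(-1, 1, size=2)
    for _ in range(20):
        y = step(x)
        X.append(lift(x))
        Y.append(lift(y))
        x = y
X, Y = np.array(X), np.array(Y)

# Least-squares fit of the linear operator K: lift(x_{t+1}) ~= K lift(x_t).
K = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Roll out linearly in the lifted space, then read off the state coordinates.
x0 = np.array([0.8, -0.4])
z = lift(x0)
x_true = x0.copy()
for _ in range(10):
    z = K @ z
    x_true = step(x_true)
print("linear-latent prediction:", z[:2])
print("ground truth:            ", x_true)
```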
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing such processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
- Physics-guided Deep Markov Models for Learning Nonlinear Dynamical Systems with Uncertainty [6.151348127802708]
We propose a physics-guided framework, termed Physics-guided Deep Markov Model (PgDMM).
The proposed framework takes advantage of the expressive power of deep learning, while retaining the driving physics of the dynamical system.
arXiv Detail & Related papers (2021-10-16T16:35:12Z)
- GEM: Group Enhanced Model for Learning Dynamical Control Systems [78.56159072162103]
We build effective dynamical models that are amenable to sample-based learning.
We show that learning the dynamics on a Lie algebra vector space is more effective than learning a direct state transition model.
This work sheds light on a connection between learning of dynamics and Lie group properties, which opens doors for new research directions.
arXiv Detail & Related papers (2021-04-07T01:08:18Z)
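The Lie-algebra idea highlighted in the entry above can be made concrete for orientation states: transitions between rotations are expressed as vectors in so(3) via the log map, modeled in that flat vector space, and mapped back with the exponential map. The sketch below (assuming numpy and scipy) uses a constant-increment stand-in for the learned model and is not GEM's architecture.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Sketch of learning dynamics in the Lie algebra for orientation states: instead
# of regressing the next rotation matrix directly, regress the increment
# xi = log(R_t^{-1} R_{t+1}) in so(3) (a plain 3-vector), then recover the next
# orientation with the exponential map. The constant-increment "model" below is
# a hypothetical stand-in for a learned regressor.

def to_algebra(R_t, R_next):
    """Relative rotation expressed as a rotation vector (axis * angle) in so(3)."""
    return (R_t.inv() * R_next).as_rotvec()

def from_algebra(R_t, xi):
    """Apply an so(3) increment to the current orientation via the exp map."""
    return R_t * Rotation.from_rotvec(xi)

# Demonstration data: constant angular velocity about a fixed axis.
true_xi = np.array([0.0, 0.05, 0.1])
traj = [Rotation.identity()]
for _ in range(50):
    traj.append(from_algebra(traj[-1], true_xi))

# "Training": average the observed algebra increments (a learned model would
# condition on the state instead of using a constant).
increments = np.array([to_algebra(traj[t], traj[t + 1]) for t in range(50)])
xi_hat = increments.mean(axis=0)

# Rollout with the fitted increment; the prediction stays a valid rotation by
# construction, which a direct 3x3-matrix regression would not guarantee.
R_pred = traj[0]
for _ in range(50):
    R_pred = from_algebra(R_pred, xi_hat)
print("geodesic error (rad):", (R_pred.inv() * traj[-1]).magnitude())
```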
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
- ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by Normalizing Flows [29.310742141970394]
We introduce ImitationFlow, a novel deep generative model that allows learning complex, globally stable, nonlinear dynamics.
We show the effectiveness of our method with both standard datasets and a real robot experiment.
arXiv Detail & Related papers (2020-10-25T14:49:46Z)
- Euclideanizing Flows: Diffeomorphic Reduction for Learning Stable Dynamical Systems [74.80320120264459]
We present an approach to learn complex motions from a limited number of human demonstrations.
The complex motions are encoded as rollouts of a stable dynamical system.
The efficacy of this approach is demonstrated through validation on an established benchmark as well as on demonstrations collected on a real-world robotic system.
arXiv Detail & Related papers (2020-05-27T03:51:57Z)
- Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)
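One common way to obtain the kind of guarantee described in this last entry is to project the output of an unconstrained network so that a Lyapunov function always decreases along trajectories. The sketch below (assuming numpy) uses hand-crafted stand-ins for both the dynamics and the Lyapunov candidate rather than the paper's neural parameterizations, and is only meant to show the projection step.

```python
import numpy as np

# Sketch of a Lyapunov-based projection that makes an arbitrary vector field
# stable by construction: whenever grad_V(x) . f_hat(x) would let the Lyapunov
# candidate decrease too slowly (or grow), the offending component is removed.
# V and f_hat are simple hand-crafted stand-ins for learned networks.

ALPHA = 0.5   # desired decrease rate: grad_V . f <= -ALPHA * V

def V(x):
    """Simple positive-definite Lyapunov candidate."""
    return 0.5 * float(x @ x)

def grad_V(x):
    return x

def f_hat(x):
    """Unconstrained 'learned' dynamics; on its own it need not be stable."""
    return np.array([x[1] + 0.3 * x[0], -np.sin(x[0]) + 0.4 * x[1]])

def f_stable(x):
    """Project f_hat so that V decreases at rate at least ALPHA along trajectories."""
    g = grad_V(x)
    violation = max(0.0, g @ f_hat(x) + ALPHA * V(x))
    return f_hat(x) - g * violation / (g @ g + 1e-12)

# Euler rollout: V is decreasing by construction, so the state converges.
x = np.array([1.5, -2.0])
for _ in range(3000):
    x = x + 0.01 * f_stable(x)
print("final state:", x, "V:", V(x))   # converges toward the origin
```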
This list is automatically generated from the titles and abstracts of the papers on this site.