Neural Contractive Dynamical Systems
- URL: http://arxiv.org/abs/2401.09352v1
- Date: Wed, 17 Jan 2024 17:18:21 GMT
- Title: Neural Contractive Dynamical Systems
- Authors: Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Nadia Figueroa, Gerhard Neumann, and Leonel Rozo
- Abstract summary: Stability guarantees are crucial when ensuring a fully autonomous robot does not take undesirable or potentially harmful actions.
We propose a novel methodology to learn neural contractive dynamical systems, where our neural architecture ensures contraction.
We show that our approach encodes the desired dynamics more accurately than the current state of the art, which offers only weaker stability guarantees.
- Score: 13.046426079291376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stability guarantees are crucial when ensuring a fully autonomous robot does
not take undesirable or potentially harmful actions. Unfortunately, global
stability guarantees are hard to provide in dynamical systems learned from
data, especially when the learned dynamics are governed by neural networks. We
propose a novel methodology to learn neural contractive dynamical systems,
where our neural architecture ensures contraction, and hence, global stability.
To efficiently scale the method to high-dimensional dynamical systems, we
develop a variant of the variational autoencoder that learns dynamics in a
low-dimensional latent representation space while retaining contractive
stability after decoding. We further extend our approach to learning
contractive systems on the Lie group of rotations to account for full-pose
end-effector dynamic motions. The result is the first highly flexible learning
architecture that provides contractive stability guarantees while remaining
capable of obstacle avoidance. Empirically, we demonstrate that our approach
encodes the desired dynamics more accurately than the current state of the art,
which offers only weaker stability guarantees.
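To make the central construction concrete, the sketch below shows one standard way to obtain contraction by design: a network outputs an unconstrained square matrix Ĵ(x), which is turned into the negative-definite matrix J(x) = -(Ĵ(x)Ĵ(x)ᵀ + εI), and the vector field is recovered by integrating J along a straight line from a base point. This is a minimal PyTorch sketch under those assumptions, not the authors' implementation; the names (JacobianNet, ContractiveField, eps, n_quad) and the midpoint quadrature are illustrative choices.

```python
# Minimal sketch (illustrative, not the authors' code) of a vector field
# whose Jacobian factor is negative definite everywhere by construction.
import torch
import torch.nn as nn

class JacobianNet(nn.Module):
    """Unconstrained network producing a square matrix J_hat(x)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, dim * dim),
        )
        self.dim = dim

    def forward(self, x):
        return self.net(x).view(-1, self.dim, self.dim)

class ContractiveField(nn.Module):
    """f(x) built from J(x) = -(J_hat J_hat^T + eps*I), which is
    symmetric negative definite with eigenvalues <= -eps."""
    def __init__(self, dim, eps=1e-2, n_quad=10):
        super().__init__()
        self.jac = JacobianNet(dim)
        self.x0 = nn.Parameter(torch.zeros(dim))  # integration base point
        self.f0 = nn.Parameter(torch.zeros(dim))  # field value at x0
        self.eps, self.n_quad, self.dim = eps, n_quad, dim

    def neg_def_matrix(self, x):
        J_hat = self.jac(x)
        eye = torch.eye(self.dim, device=x.device)
        return -(J_hat @ J_hat.transpose(-1, -2) + self.eps * eye)

    def forward(self, x):
        # f(x) = f(x0) + \int_0^1 J(x0 + t(x - x0)) (x - x0) dt,
        # approximated here with a simple midpoint quadrature.
        dx = x - self.x0
        out = self.f0.expand_as(x).clone()
        for k in range(self.n_quad):
            t = (k + 0.5) / self.n_quad
            J = self.neg_def_matrix(self.x0 + t * dx)
            out = out + (J @ dx.unsqueeze(-1)).squeeze(-1) / self.n_quad
        return out

field = ContractiveField(dim=2)
x = torch.randn(8, 2)
print(field(x).shape)  # torch.Size([8, 2])
```

Here eps > 0 caps the eigenvalues of J below -eps, so it acts as a guaranteed minimum contraction rate; the sketch only illustrates the parameterization, while the formal contraction argument is the paper's contribution.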
Related papers
- Learning Deep Dissipative Dynamics [5.862431328401459]
Dissipativity is a crucial property of dynamical systems that generalizes both stability and input-output stability.
We propose a differentiable projection that transforms any dynamics represented by neural networks into dissipative ones.
Our method strictly guarantees stability, input-output stability, and energy conservation of trained dynamical systems.
arXiv Detail & Related papers (2024-08-21T09:44:43Z) - Incorporating Neuro-Inspired Adaptability for Continual Learning in
Artificial Intelligence [59.11038175596807]
Continual learning aims to empower artificial intelligence with strong adaptability to the real world.
Existing advances mainly focus on preserving memory stability to overcome catastrophic forgetting.
We propose a generic approach that appropriately attenuates old memories in parameter distributions to improve learning plasticity.
arXiv Detail & Related papers (2023-08-29T02:43:58Z) - Data-Driven Control with Inherent Lyapunov Stability [3.695480271934742]
We propose Control with Inherent Lyapunov Stability (CoILS) as a method for jointly learning parametric representations of a nonlinear dynamics model and a stabilizing controller from data.
In addition to the stabilizability of the learned dynamics guaranteed by our novel construction, we show that the learned controller stabilizes the true dynamics under certain assumptions on the fidelity of the learned dynamics.
arXiv Detail & Related papers (2023-03-06T14:21:42Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning, and then propose a simple yet effective numerical solver, Attr, which introduces an additive self-attention mechanism into the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z) - Learning Stabilizable Deep Dynamics Models [1.75320459412718]
We propose a new method for learning the dynamics of input-affine control systems.
An important feature is that a stabilizing controller and control Lyapunov function of the learned model are obtained as well.
The proposed method can also be applied to solving Hamilton-Jacobi inequalities.
arXiv Detail & Related papers (2022-03-18T03:09:24Z) - Recurrent Neural Network Controllers Synthesis with Stability Guarantees
for Partially Observed Systems [6.234005265019845]
We consider the important class of recurrent neural networks (RNNs) as dynamic controllers for nonlinear, uncertain, partially observed systems.
We propose a projected policy gradient method that iteratively enforces the stability conditions in the reparametrized space.
Numerical experiments show that our method learns stabilizing controllers while using fewer samples and achieving higher final performance compared with policy gradient.
arXiv Detail & Related papers (2021-09-08T18:21:56Z) - Safe Active Dynamics Learning and Control: A Sequential
Exploration-Exploitation Framework [30.58186749790728]
We propose a theoretically-justified approach to maintaining safety in the presence of dynamics uncertainty.
Our framework guarantees the high-probability satisfaction of all constraints at all times jointly.
This theoretical analysis also motivates two regularizers of last-layer meta-learning models that improve online adaptation capabilities.
arXiv Detail & Related papers (2020-08-26T17:39:58Z) - Limited-angle tomographic reconstruction of dense layered objects by
dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z) - Euclideanizing Flows: Diffeomorphic Reduction for Learning Stable
Dynamical Systems [74.80320120264459]
We present an approach to learning stable robot motions from a limited number of human demonstrations.
The complex motions are encoded as rollouts of a stable dynamical system.
The efficacy of this approach is demonstrated through validation on an established benchmark as well as on demonstrations collected on a real-world robotic system.
arXiv Detail & Related papers (2020-05-27T03:51:57Z) - Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.