Learning Constrained Dynamics with Gauss Principle adhering Gaussian
Processes
- URL: http://arxiv.org/abs/2004.11238v1
- Date: Thu, 23 Apr 2020 15:26:51 GMT
- Authors: A. Rene Geist and Sebastian Trimpe
- Abstract summary: We propose to combine insights from analytical mechanics with Gaussian process regression to improve the model's data efficiency and constraint integrity.
Our model makes it possible to infer the acceleration of the unconstrained system from data of the constrained system.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The identification of the constrained dynamics of mechanical systems is often
challenging. Learning methods promise to ease an analytical analysis, but
require considerable amounts of data for training. We propose to combine
insights from analytical mechanics with Gaussian process regression to improve
the model's data efficiency and constraint integrity. The result is a Gaussian
process model that incorporates a priori constraint knowledge such that its
predictions adhere to Gauss' principle of least constraint. In return,
predictions of the system's acceleration naturally respect potentially
non-ideal (non-)holonomic equality constraints. As a corollary, our model
makes it possible to infer the acceleration of the unconstrained system from
data of the constrained system and enables knowledge transfer between
differing constraint configurations.
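For context, the principle referenced above can be stated compactly. The following is a standard textbook formulation, not taken verbatim from this paper; the symbols M (mass matrix), a (unconstrained acceleration), and A, b (constraint terms) are generic:

```latex
% Gauss' principle of least constraint: among all accelerations consistent
% with the constraints, the realized one minimizes the mass-metric distance
% to the unconstrained acceleration a(q, \dot{q}):
\ddot{q}^* = \arg\min_{\ddot{q}} \; (\ddot{q} - a)^\top M \, (\ddot{q} - a)
\quad \text{s.t.} \quad A(q, \dot{q})\, \ddot{q} = b(q, \dot{q})

% Closed-form solution (Udwadia--Kalaba form), with (\cdot)^{+} the
% Moore--Penrose pseudoinverse:
\ddot{q}^* = a + M^{-1/2} \bigl( A M^{-1/2} \bigr)^{+} (b - A a)
```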
Related papers
- No Equations Needed: Learning System Dynamics Without Relying on Closed-Form ODEs [56.78271181959529]
This paper proposes a conceptual shift to modeling low-dimensional dynamical systems by departing from the traditional two-step modeling process.
Instead of first discovering a closed-form equation and then analyzing it, our approach, direct semantic modeling, predicts the semantic representation of the dynamical system.
Our approach not only simplifies the modeling pipeline but also enhances the transparency and flexibility of the resulting models.
arXiv Detail & Related papers (2025-01-30T18:36:48Z)
- CGNSDE: Conditional Gaussian Neural Stochastic Differential Equation for Modeling Complex Systems and Data Assimilation [1.4322470793889193]
A new knowledge-based and machine learning hybrid modeling approach, called conditional neural differential equation (CGNSDE), is developed.
In contrast to the standard neural network predictive models, the CGNSDE is designed to effectively tackle both forward prediction tasks and inverse state estimation problems.
arXiv Detail & Related papers (2024-04-10T05:32:03Z)
- Physics-Informed Kernel Embeddings: Integrating Prior System Knowledge with Data-Driven Control [22.549914935697366]
We present a method to incorporate a priori knowledge into data-driven control algorithms using kernel embeddings.
Our proposed approach incorporates prior knowledge of the system dynamics as a bias term in the kernel learning problem.
We demonstrate the improved sample efficiency and out-of-sample generalization of our approach over a purely data-driven baseline.
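As a generic illustration of the bias-term idea (a minimal sketch, not this paper's kernel-embedding formulation; the toy dynamics and all names below are hypothetical), a Gaussian process whose prior mean is a known approximate model only has to learn the residual from data:

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel for 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, prior_mean, noise_var=1e-4):
    """GP posterior mean with a non-zero prior mean function m(x):
    the GP only has to learn the residual y - m(x)."""
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    residual = y_train - prior_mean(x_train)
    return prior_mean(x_test) + K_s @ np.linalg.solve(K, residual)

# Hypothetical toy system: true dynamics = known physics model + small residual.
true_f = lambda x: np.sin(x) + 0.1 * x**2   # "ground truth"
known = lambda x: np.sin(x)                 # a priori model used as prior mean

rng = np.random.default_rng(0)
x_train = np.linspace(-3.0, 3.0, 20)
y_train = true_f(x_train) + 0.01 * rng.standard_normal(20)

x_test = np.linspace(-2.5, 2.5, 5)
pred = gp_posterior_mean(x_train, y_train, x_test, known)
```

Because the prior mean already explains most of the signal, the GP needs fewer samples than with the usual zero-mean prior, mirroring the sample-efficiency argument above.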
arXiv Detail & Related papers (2023-01-09T18:35:32Z)
- Structure-Preserving Learning Using Gaussian Processes and Variational Integrators [62.31425348954686]
We propose the combination of a variational integrator for the nominal dynamics of a mechanical system and learning residual dynamics with Gaussian process regression.
We extend our approach to systems with known kinematic constraints and provide formal bounds on the prediction uncertainty.
arXiv Detail & Related papers (2021-12-10T11:09:29Z)
- Using Data Assimilation to Train a Hybrid Forecast System that Combines Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data consists of noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Learning Stable Nonparametric Dynamical Systems with Gaussian Process Regression [9.126353101382607]
We learn a nonparametric Lyapunov function based on Gaussian process regression from data.
We prove that stabilization of the nominal model based on the nonparametric control Lyapunov function does not modify the behavior of the nominal model at training samples.
arXiv Detail & Related papers (2020-06-14T11:17:17Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of stochasticity in its success is still unclear.
We show that heavy-tailed behavior commonly arises in the parameters due to multiplicative noise.
A detailed analysis of the key factors, including step size and data, shows that state-of-the-art neural network models exhibit similar results.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
- Physics Informed Deep Kernel Learning [24.033468062984458]
Physics Informed Deep Kernel Learning (PI-DKL) exploits physics knowledge represented by differential equations with latent sources.
For efficient and effective inference, we marginalize out the latent variables and derive a collapsed model evidence lower bound (ELBO).
arXiv Detail & Related papers (2020-06-08T22:43:31Z)
- On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and, in particular, dissipative Hamiltonian systems preserves rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
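To illustrate the structure-preservation idea behind symplectic integration (a toy harmonic-oscillator sketch, not the framework of the paper above; all names below are hypothetical): a symplectic method keeps the energy error bounded, whereas explicit Euler's error grows without bound.

```python
def explicit_euler(q, p, dt, steps):
    """Explicit Euler on a unit harmonic oscillator, H = (q^2 + p^2) / 2."""
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def symplectic_euler(q, p, dt, steps):
    """Semi-implicit (symplectic) Euler: update p first, then q with new p."""
    for _ in range(steps):
        p = p - dt * q
        q = q + dt * p
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)

q0, p0, dt, steps = 1.0, 0.0, 0.01, 10_000  # integrate to t = 100

q_e, p_e = explicit_euler(q0, p0, dt, steps)
q_s, p_s = symplectic_euler(q0, p0, dt, steps)

# Explicit Euler multiplies the energy by (1 + dt^2) each step, so it grows
# roughly e-fold over this run; the symplectic variant stays within O(dt)
# of the initial energy, mirroring the preservation guarantees discussed above.
```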
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.