Learning Hybrid Dynamics Models With Simulator-Informed Latent States
- URL: http://arxiv.org/abs/2309.02873v2
- Date: Mon, 29 Jan 2024 20:25:52 GMT
- Title: Learning Hybrid Dynamics Models With Simulator-Informed Latent States
- Authors: Katharina Ensinger, Sebastian Ziesche, Sebastian Trimpe
- Abstract summary: We propose a new approach to hybrid modeling, where we inform the latent states of a learned model via a simulator.
This allows us to control the predictions via the simulator, preventing them from accumulating errors.
In our learning-based setting, we jointly learn the dynamics and an observer that infers the latent states via the simulator.
- Score: 7.801959219897031
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamics model learning deals with the task of inferring unknown dynamics
from measurement data and predicting the future behavior of the system. A
typical approach to address this problem is to train recurrent models. However,
predictions with these models are often not physically meaningful. Further,
they suffer from deteriorated behavior over time due to accumulating errors.
Often, simulators built on first principles are available and are physically
meaningful by design. However, modeling simplifications typically cause
inaccuracies in these models. Consequently, hybrid modeling is an emerging
trend that aims to combine the best of both worlds. In this paper, we propose a
new approach to hybrid modeling, where we inform the latent states of a learned
model via a black-box simulator. This allows us to control the predictions via
the simulator, preventing them from accumulating errors. This is especially
challenging since, in contrast to previous approaches, access to the
simulator's latent states is not available. We tackle the task by leveraging
observers, a well-known concept from control theory, inferring unknown latent
states from observations and dynamics over time. In our learning-based setting,
we jointly learn the dynamics and an observer that infers the latent states via
the simulator. Thus, the simulator constantly corrects the latent states,
compensating for modeling mismatch caused by learning. To maintain flexibility,
we train an RNN-based residuum for the latent states that cannot be informed by
the simulator.
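As a rough illustration of the observer idea (a sketch, not the authors' implementation), the snippet below corrects a learned latent state with the output of a black-box simulator through an observer gain; all names (f_theta, g_theta, simulator_step, K, residuum_rnn) are placeholders.
```python
# Minimal sketch: a learned latent transition f_theta and decoder g_theta,
# where an observer gain K pulls the latent state towards the black-box
# simulator's output. All functions and the gain are placeholders.
import numpy as np

def rollout(z0, steps, f_theta, g_theta, simulator_step, K, residuum_rnn=None):
    """Roll out the hybrid model, correcting latents with simulator outputs."""
    z, preds = z0, []
    for t in range(steps):
        y_sim = simulator_step(t)          # black-box simulator observation
        # observer update: correct the latent state with the simulator mismatch
        z = f_theta(z) + K @ (y_sim - g_theta(z))
        if residuum_rnn is not None:       # RNN residuum for latent states
            z = z + residuum_rnn(z)        # the simulator cannot inform
        preds.append(g_theta(z))
    return np.stack(preds)
```
In the paper's setting, the dynamics and the observer are learned jointly, so the correction would be a trainable component rather than a fixed gain as in this sketch.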
Related papers
- Stabilizing Machine Learning Prediction of Dynamics: Noise and Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, however, this can result in artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
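A minimal sketch of the stochastic effect that LMNT is described as approximating deterministically, namely averaging the training loss over several small input-noise realizations; `model`, `loss_fn`, and the data tensors are placeholders, and this is not the paper's LMNT implementation.
```python
# Hedged sketch: plain input-noise regularization, i.e. the stochastic effect
# that LMNT deterministically approximates. All names are placeholders.
import torch

def noisy_training_step(model, loss_fn, optimizer, x, y, sigma=1e-3, n_noise=8):
    optimizer.zero_grad()
    loss = 0.0
    for _ in range(n_noise):                   # many small noise realizations
        x_noisy = x + sigma * torch.randn_like(x)
        loss = loss + loss_fn(model(x_noisy), y)
    (loss / n_noise).backward()                # average over realizations
    optimizer.step()
```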
arXiv Detail & Related papers (2022-11-09T23:40:52Z)
- Continual learning autoencoder training for a particle-in-cell simulation via streaming [52.77024349608834]
The upcoming exascale era will provide a new generation of physics simulations with very high resolution.
This high resolution will affect the training of machine learning models, since storing such large amounts of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently with a running simulation, without storing any data on disk.
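A hedged sketch of this streaming setup: the network is updated from batches emitted by a live simulation, so no training data is written to disk; `autoencoder` and `simulation_stream` are placeholders, not the paper's code.
```python
# Hedged sketch: online training from a running simulation, without disk I/O.
import torch

def train_from_stream(autoencoder, simulation_stream, lr=1e-3, max_batches=1000):
    opt = torch.optim.Adam(autoencoder.parameters(), lr=lr)
    for i, batch in enumerate(simulation_stream):    # batch: torch.Tensor
        opt.zero_grad()
        recon = autoencoder(batch)                   # reconstruct the snapshot
        loss = torch.nn.functional.mse_loss(recon, batch)
        loss.backward()
        opt.step()
        if i + 1 >= max_batches:                     # stop with the simulation
            break
```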
arXiv Detail & Related papers (2022-11-09T09:55:14Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamics models, simulations, and actual hardware.
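A rough sketch of the residual idea (the paper's learning-based unscented Kalman filter is omitted): the next real-robot state is predicted as the simulator's output plus a learned correction; `simulator_step` and `residual_net` are placeholders.
```python
# Hedged sketch: real system approximated as simulator plus learned residual.
import numpy as np

def hybrid_step(x, u, simulator_step, residual_net):
    """Predict the next real-robot state as simulator prediction + residual."""
    x_sim = simulator_step(x, u)                     # first-principles prediction
    return x_sim + residual_net(np.concatenate([x, u]))
```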
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Simulating Liquids with Graph Networks [25.013244956897832]
We investigate graph neural networks (GNNs) for learning fluid dynamics.
Our results indicate that learning models, such as GNNs, fail to learn the exact underlying dynamics unless the training set is devoid of any other problem-specific correlations.
arXiv Detail & Related papers (2022-03-14T15:39:27Z)
- Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
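One hedged illustration of such a numerical-analysis style check: a model that has truly learned a continuous vector field should yield predictions that converge as the integration step is refined; `learned_rhs` is a placeholder and this generic test is not the paper's specific criterion.
```python
# Hedged sketch: check that predictions converge as the time step is refined.
import numpy as np

def convergence_check(learned_rhs, x0, t_end, dts=(0.1, 0.05, 0.025)):
    """Integrate with forward Euler at several step sizes and compare endpoints."""
    endpoints = []
    for dt in dts:
        x = np.array(x0, dtype=float)
        for _ in range(int(round(t_end / dt))):
            x = x + dt * learned_rhs(x)
        endpoints.append(x)
    # successive differences should shrink roughly in proportion to dt for Euler
    return [np.linalg.norm(a - b) for a, b in zip(endpoints, endpoints[1:])]
```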
arXiv Detail & Related papers (2022-02-17T07:56:46Z)
- Automated Dissipation Control for Turbulence Simulation with Shell Models [1.675857332621569]
The application of machine learning (ML) techniques, especially neural networks, has seen tremendous success at processing images and language.
In this work we construct a strongly simplified representation of turbulence by using the Gledzer-Ohkitani-Yamada shell model.
We propose an approach that aims to reconstruct statistical properties of turbulence such as the self-similar inertial-range scaling.
arXiv Detail & Related papers (2022-01-07T15:03:52Z)
- Constraint-based graph network simulator [9.462808515258464]
We present a framework for constraint-based learned simulation.
We implement our method using a graph neural network as the constraint function and gradient descent as the constraint solver.
Our model achieves better or comparable performance to top learned simulators.
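A hedged sketch of constraint-based learned simulation: the next state is obtained by minimizing a learned constraint violation with gradient descent; a plain network `constraint_net` stands in for the paper's graph neural network.
```python
# Hedged sketch: gradient descent as the constraint solver at inference time.
import torch

def solve_next_state(constraint_net, x_t, n_steps=20, lr=0.1):
    x_next = x_t.clone().requires_grad_(True)        # optimize the next state
    opt = torch.optim.SGD([x_next], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        c = constraint_net(torch.cat([x_t, x_next])).pow(2).sum()  # violation
        c.backward()
        opt.step()
    return x_next.detach()
```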
arXiv Detail & Related papers (2021-12-16T19:15:11Z)
- Likelihood-Free Inference in State-Space Models with Unknown Dynamics [71.94716503075645]
We introduce a method for inferring and predicting latent states in state-space models where observations can only be simulated, and transition dynamics are unknown.
We propose a way of doing likelihood-free inference (LFI) of states and state prediction with a limited number of simulations.
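A rough sketch of the likelihood-free flavor of this setting, using a basic rejection step rather than the paper's method: candidate states are kept when their simulated observations fall close to the measurement; `simulate_obs` and `prior_sample` are placeholders.
```python
# Hedged sketch: ABC-style rejection as a simple form of likelihood-free
# state inference under a limited simulation budget.
import numpy as np

def lfi_state_posterior(y_obs, simulate_obs, prior_sample, n_sim=1000, eps=0.1):
    accepted = []
    for _ in range(n_sim):                       # limited number of simulations
        x = prior_sample()                       # candidate latent state
        y_sim = simulate_obs(x)                  # observations can only be simulated
        if np.linalg.norm(y_sim - y_obs) < eps:
            accepted.append(x)
    return np.array(accepted)                    # samples approximating p(x | y_obs)
```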
arXiv Detail & Related papers (2021-11-02T12:33:42Z)
- Simulated Adversarial Testing of Face Recognition Models [53.10078734154151]
We propose a framework for learning how to test machine learning algorithms using simulators in an adversarial manner.
We are the first to show that weaknesses of models trained on real data can be discovered using simulated samples.
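A hedged sketch of the adversarial-testing idea: search over simulator parameters for rendered samples on which the trained model fails; `render`, `model`, and the parameter ranges are placeholders, not the paper's framework.
```python
# Hedged sketch: random search over simulator parameters for model failures.
import numpy as np

def find_weaknesses(model, render, true_label, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    failures = []
    for _ in range(n_trials):
        params = rng.uniform(-1.0, 1.0, size=4)      # simulator parameters
        image = render(params)                       # simulated sample
        if model(image) != true_label:               # model fails on this sample
            failures.append(params)
    return np.array(failures)
```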
arXiv Detail & Related papers (2021-06-08T17:58:10Z)
- Transfer learning suppresses simulation bias in predictive models built from sparse, multi-modal data [15.587831925516957]
Many problems in science, engineering, and business require making predictions based on very few observations.
To build a robust predictive model, these sparse data may need to be augmented with simulated data, especially when the design space is multidimensional.
We combine recent developments in deep learning with transfer learning to build more robust predictive models from sparse, multimodal data.
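A minimal sketch of the transfer-learning recipe implied here: pretrain on plentiful simulated data, then fine-tune on the few real observations so that simulation bias is suppressed; `model` and the data loaders are placeholders.
```python
# Hedged sketch: pretrain on simulated data, fine-tune on sparse real data.
import torch

def pretrain_then_finetune(model, sim_loader, real_loader, lr_sim=1e-3, lr_real=1e-4):
    loss_fn = torch.nn.MSELoss()
    for lr, loader, epochs in ((lr_sim, sim_loader, 10), (lr_real, real_loader, 50)):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
    return model
```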
arXiv Detail & Related papers (2021-04-19T23:28:32Z)