Combining Slow and Fast: Complementary Filtering for Dynamics Learning
- URL: http://arxiv.org/abs/2302.13754v2
- Date: Wed, 1 Mar 2023 14:29:48 GMT
- Title: Combining Slow and Fast: Complementary Filtering for Dynamics Learning
- Authors: Katharina Ensinger, Sebastian Ziesche, Barbara Rakitsch, Michael
Tiemann, Sebastian Trimpe
- Abstract summary: We propose a purely learning-based approach to dynamics model learning.
We also propose a hybrid model that requires an additional physics-based simulator.
- Score: 9.11991227308599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling an unknown dynamical system is crucial in order to predict the
future behavior of the system. A standard approach is training recurrent models
on measurement data. While these models typically provide accurate short-term
predictions, accumulating errors degrade their long-term behavior. In
contrast, models with reliable long-term predictions can often be obtained,
either by training a robust but less detailed model, or by leveraging
physics-based simulations. In both cases, inaccuracies in the models yield a
lack of short-term detail. Thus, different models with contrasting properties
on different time horizons are available. This observation immediately raises
the question: Can we obtain predictions that combine the best of both worlds?
Inspired by sensor fusion tasks, we interpret the problem in the frequency
domain and leverage classical methods from signal processing, in particular
complementary filters. This filtering technique combines two signals by
applying a high-pass filter to one signal, and low-pass filtering the other.
Essentially, the high-pass filter extracts high-frequencies, whereas the
low-pass filter extracts low frequencies. Applying this concept to dynamics
model learning enables the construction of models that yield accurate long- and
short-term predictions. Here, we propose two methods, one being purely
learning-based and the other one being a hybrid model that requires an
additional physics-based simulator.
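Below is a minimal numerical sketch of the complementary-filtering idea described in the abstract: a low-pass filter is applied to a "slow" but robust long-term model and the complementary high-pass filter to a "fast" short-term model, and the two filtered signals are summed. The toy signals, the first-order exponential filter, and the cutoff are illustrative assumptions, not the construction used in the paper.
```python
# A minimal sketch of complementary filtering for fusing two predictors.
# The toy signals, the first-order exponential filter, and the cutoff are
# illustrative assumptions, not the construction used in the paper.
import numpy as np

def lowpass(x, alpha):
    """First-order exponential low-pass: y[t] = alpha*x[t] + (1-alpha)*y[t-1]."""
    y = np.zeros_like(x)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = alpha * x[t] + (1.0 - alpha) * y[t - 1]
    return y

t = np.linspace(0.0, 20.0, 2000)
truth = np.sin(t) + 0.3 * np.sin(8.0 * t)     # slow trend plus fast detail

# "Fast" model: captures the fast detail but accumulates a slow drift,
# mimicking a recurrent model whose errors build up over time.
fast_pred = truth + 0.05 * t
# "Slow" model: reliable long-term trend, but misses the fast component,
# mimicking a coarse but robust model or a physics-based simulator.
slow_pred = np.sin(t)

alpha = 0.02                                  # low-pass cutoff parameter
low = lowpass(slow_pred, alpha)               # low frequencies of the slow model
high = fast_pred - lowpass(fast_pred, alpha)  # complementary high-pass of the fast model
fused = low + high                            # combined prediction

for name, pred in [("fast", fast_pred), ("slow", slow_pred), ("fused", fused)]:
    print(f"{name:5s} RMSE: {np.sqrt(np.mean((pred - truth) ** 2)):.3f}")
```
Because the two filters are exact complements, any frequency content shared by both models is reconstructed without distortion, while the fast model's drift and the slow model's missing detail are each suppressed by the respective filter.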
Related papers
- Multi-Step Embed to Control: A Novel Deep Learning-based Approach for Surrogate Modelling in Reservoir Simulation [0.0]
Reduced-order models, also known as proxy or surrogate models, are approximate models that are less computationally expensive than fully descriptive models.
This paper introduces a deep learning-based surrogate model, referred to as the multi-step embed-to-control model, for the construction of proxy models with improved long-term prediction performance.
arXiv Detail & Related papers (2024-09-16T01:35:34Z)
- A Multi-Graph Convolutional Neural Network Model for Short-Term Prediction of Turning Movements at Signalized Intersections [0.6215404942415159]
This study introduces a novel deep learning architecture, referred to as the multigraph convolution neural network (MGCNN) for turning movement prediction at intersections.
The proposed architecture combines a multigraph structure, built to model temporal variations in traffic data, with a spectral convolution operation to support modeling the spatial variations in traffic data over the graphs.
The model's ability to perform short-term predictions over 1, 2, 3, 4, and 5 minutes into the future was evaluated against four baseline state-of-the-art models.
arXiv Detail & Related papers (2024-06-02T05:41:25Z)
- Stabilizing Machine Learning Prediction of Dynamics: Noise and Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, however, this approach can result in artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
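The sketch below only illustrates the general first-order principle behind such noise-inspired regularization, namely that the average effect of many small, independent input-noise realizations can be approximated deterministically through the model's input Jacobian; the toy model and numbers are assumptions for illustration, and this is not the LMNT algorithm itself.
```python
# Generic illustration (not the LMNT algorithm itself) of replacing many
# small input-noise realizations with a deterministic, linearized quantity:
# E[||f(x+eps) - f(x)||^2] ~= sigma^2 * ||J_f(x)||_F^2 for small noise.
import numpy as np

rng = np.random.default_rng(1)

# A small fixed nonlinear map f(x) = tanh(W x), standing in for a trained model.
W = rng.normal(size=(4, 4))
f = lambda x: np.tanh(W @ x)

def jacobian(x):
    # Analytic Jacobian of tanh(W x): diag(1 - tanh(W x)^2) @ W.
    return (1.0 - np.tanh(W @ x) ** 2)[:, None] * W

x = rng.normal(size=4)
sigma = 1e-3
fx = f(x)

# Monte-Carlo estimate: average squared output perturbation over many
# small, independent input-noise realizations.
noise = sigma * rng.normal(size=(20000, 4))
mc = np.mean([np.sum((f(x + e) - fx) ** 2) for e in noise])

# Deterministic approximation via the input Jacobian.
lin = sigma ** 2 * np.sum(jacobian(x) ** 2)

print(f"Monte-Carlo noise effect : {mc:.3e}")
print(f"linearized approximation : {lin:.3e}")
```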
arXiv Detail & Related papers (2022-11-09T23:40:52Z)
- Predicting traffic signals on transportation networks using spatio-temporal correlations on graphs [56.48498624951417]
This paper proposes a traffic propagation model that merges multiple heat diffusion kernels into a data-driven prediction model to forecast traffic signals.
We optimize the model parameters using Bayesian inference to minimize the prediction errors and, consequently, determine the mixing ratio of the two approaches.
The proposed model demonstrates prediction accuracy comparable to that of the state-of-the-art deep neural networks with lower computational effort.
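As a rough illustration of the kernel-mixing idea, the toy example below blends two graph heat-diffusion kernels with different diffusion times and recovers the mixing ratio from data; the ring graph, diffusion times, and the simple squared-error grid search (standing in for the paper's Bayesian inference) are all assumptions made for this sketch.
```python
# Illustrative toy example of blending two graph heat-diffusion kernels and
# recovering the mixing ratio from data. The ring graph, diffusion times, and
# the grid search on squared error (standing in for the paper's Bayesian
# inference) are all assumptions made for this sketch.
import numpy as np

# Adjacency matrix and combinatorial Laplacian of a small ring graph.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

def heat_kernel(L, t):
    """exp(-t L) via eigendecomposition of the symmetric Laplacian."""
    vals, vecs = np.linalg.eigh(L)
    return vecs @ np.diag(np.exp(-t * vals)) @ vecs.T

K_fast, K_slow = heat_kernel(L, 0.2), heat_kernel(L, 2.0)

rng = np.random.default_rng(2)
x = rng.random(n)                                  # current signal on the nodes
true_ratio = 0.7
y = true_ratio * K_fast @ x + (1.0 - true_ratio) * K_slow @ x
y = y + 0.01 * rng.normal(size=n)                  # noisy "next-step" observation

# Recover the mixing ratio with a simple grid search on squared error.
ratios = np.linspace(0.0, 1.0, 101)
errs = [np.sum((r * K_fast @ x + (1.0 - r) * K_slow @ x - y) ** 2) for r in ratios]
print(f"estimated mixing ratio: {ratios[int(np.argmin(errs))]:.2f}")
```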
arXiv Detail & Related papers (2021-04-27T18:17:42Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- A Novel Anomaly Detection Algorithm for Hybrid Production Systems based on Deep Learning and Timed Automata [73.38551379469533]
DAD:DeepAnomalyDetection is a new approach for automatic model learning and anomaly detection in hybrid production systems.
It combines deep learning and timed automata to create a behavioral model from observations.
The algorithm has been applied to a few data sets, including two from real systems, and has shown promising results.
arXiv Detail & Related papers (2020-10-29T08:27:43Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Non-parametric generalized linear model [7.936841911281107]
A fundamental problem in statistical neuroscience is to model how neurons encode information by analyzing electrophysiological recordings.
A popular and widely-used approach is to fit the spike trains with an autoregressive point process model.
In practice a sufficiently rich but small ensemble of temporal basis functions needs to be chosen to parameterize the filters.
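As a rough illustration of the standard approach this paper builds on, the sketch below fits an autoregressive Poisson GLM to a simulated spike train, with the history filter parameterized by a small set of temporal basis functions; the exponential basis, ground-truth weights, and plain gradient ascent are illustrative choices, not the paper's method.
```python
# Sketch of the standard approach described above: an autoregressive Poisson
# GLM for a spike train whose history filter is parameterized by a few
# temporal basis functions. The exponential basis, ground-truth weights, and
# plain gradient ascent are illustrative choices, not the paper's method.
import numpy as np

rng = np.random.default_rng(3)
T, hist = 5000, 30

# Decaying-exponential basis over the history window (raised-cosine bases
# are the more common choice in practice).
lags = np.arange(1, hist + 1)
basis = np.stack([np.exp(-lags / tau) for tau in (2.0, 6.0, 15.0)])  # (3, hist)

# Ground-truth filter weights and baseline, then sequential spike simulation.
w_true, b_true = np.array([-2.0, 1.0, -0.5]), -2.5
spikes = np.zeros(T)
for t in range(hist, T):
    h = basis @ spikes[t - hist:t][::-1]        # history features at time t
    lam = np.exp(b_true + w_true @ h)           # conditional intensity
    spikes[t] = rng.poisson(lam)

# Design matrix of history features, then gradient ascent on the Poisson
# log-likelihood sum(y * log(lam) - lam).
H = np.stack([basis @ spikes[t - hist:t][::-1] for t in range(hist, T)])
y = spikes[hist:]
w, b = np.zeros(3), 0.0
for _ in range(10000):
    lam = np.exp(b + H @ w)
    w = w + 1e-4 * H.T @ (y - lam)
    b = b + 1e-4 * np.sum(y - lam)

print(f"true   w = {w_true}, b = {b_true}")
print(f"fitted w = {np.round(w, 2)}, b = {b:.2f}")
```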
arXiv Detail & Related papers (2020-09-02T21:54:53Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance across a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
- Combining data assimilation and machine learning to emulate a dynamical model from sparse and noisy observations: a case study with the Lorenz 96 model [0.0]
The method consists of iteratively applying a data assimilation step, here an ensemble Kalman filter, and a neural network.
Data assimilation is used to optimally combine a surrogate model with sparse data.
The output analysis is spatially complete and is used as a training set by the neural network to update the surrogate model.
Numerical experiments have been carried out using the chaotic 40-variable Lorenz 96 model, proving both convergence and statistical skill of the proposed hybrid approach.
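A heavily simplified sketch of this alternation follows: a scalar toy system, a basic stochastic ensemble Kalman update, and linear least squares standing in for the neural-network surrogate; none of these modeling choices (dynamics, noise levels, observation pattern) are taken from the paper.
```python
# Heavily simplified sketch of alternating data assimilation and model
# learning: a scalar toy system, a stochastic ensemble Kalman update, and
# linear least squares standing in for the neural-network surrogate.
import numpy as np

rng = np.random.default_rng(4)
a_true, c, q, r = 0.9, 0.1, 0.05, 0.3     # dynamics x' = a*x + c, noise std devs
T, n_ens = 200, 50

# Simulate the "true" trajectory and sparse, noisy observations (every 4th step).
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + c + q * rng.normal()
obs_t = np.arange(0, T, 4)
obs = x[obs_t] + r * rng.normal(size=obs_t.size)

a_hat = 0.5                                # initial (poor) surrogate parameter
for outer in range(5):
    # --- Data assimilation pass: scalar ensemble Kalman filter ---
    ens = rng.normal(size=n_ens)
    analysis = np.zeros(T)
    for t in range(T):
        if t > 0:                          # forecast with the current surrogate
            ens = a_hat * ens + c + q * rng.normal(size=n_ens)
        if t in obs_t:                     # update toward the perturbed observation
            P = np.var(ens)
            K = P / (P + r ** 2)
            y_t = obs[np.searchsorted(obs_t, t)]
            ens = ens + K * (y_t + r * rng.normal(size=n_ens) - ens)
        analysis[t] = ens.mean()
    # --- Model-learning pass: refit the surrogate on the analysis trajectory ---
    X, Y = analysis[:-1], analysis[1:] - c  # forcing c assumed known here
    a_hat = float(X @ Y / (X @ X))          # least squares, standing in for the NN
    print(f"iteration {outer}: a_hat = {a_hat:.3f} (true value {a_true})")
```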
arXiv Detail & Related papers (2020-01-06T12:26:52Z)