A Multi-step Loss Function for Robust Learning of the Dynamics in
Model-based Reinforcement Learning
- URL: http://arxiv.org/abs/2402.03146v1
- Date: Mon, 5 Feb 2024 16:13:00 GMT
- Title: A Multi-step Loss Function for Robust Learning of the Dynamics in
Model-based Reinforcement Learning
- Authors: Abdelhakim Benechehab, Albert Thomas, Giuseppe Paolo, Maurizio
Filippone and Balázs Kégl
- Abstract summary: In model-based reinforcement learning, most algorithms rely on simulating trajectories from one-step models of the dynamics learned on data.
We tackle this issue by using a multi-step objective to train one-step models.
We find that this new loss is particularly useful when the data is noisy, which is often the case in real-life environments.
- Score: 10.940666275830052
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In model-based reinforcement learning, most algorithms rely on simulating
trajectories from one-step models of the dynamics learned on data. A critical
challenge of this approach is the compounding of one-step prediction errors as
the length of the trajectory grows. In this paper we tackle this issue by using
a multi-step objective to train one-step models. Our objective is a weighted
sum of the mean squared error (MSE) loss at various future horizons. We find
that this new loss is particularly useful when the data is noisy (additive
Gaussian noise in the observations), which is often the case in real-life
environments. To support the multi-step loss, first we study its properties in
two tractable cases: i) a uni-dimensional linear system, and ii) a two-parameter
non-linear system. Second, we show in a variety of tasks (environments or
datasets) that the models learned with this loss achieve a significant
improvement in terms of the averaged R2-score on future prediction horizons.
Finally, in the pure batch reinforcement learning setting, we demonstrate that
one-step models serve as strong baselines when dynamics are deterministic,
while multi-step models would be more advantageous in the presence of noise,
highlighting the potential of our approach in real-world applications.
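The objective described in the abstract, a weighted sum of MSE losses at several future horizons computed by unrolling a one-step model, can be sketched in plain Python. This is a minimal illustration rather than the authors' implementation: the `step_fn` interface, the weight values, and the toy linear system are assumptions chosen to mirror the paper's first tractable case.

```python
def multistep_loss(step_fn, states, horizon, weights):
    """Weighted sum of MSE losses at horizons 1..horizon.

    step_fn : hypothetical one-step dynamics model, s_{t+1} = step_fn(s_t)
    states  : observed scalar trajectory [s_0, ..., s_{T-1}]
    weights : one weight per horizon h = 1..horizon
    """
    T = len(states)
    total = 0.0
    for h, w in zip(range(1, horizon + 1), weights):
        sq_errs = []
        for t in range(T - h):
            pred = states[t]
            for _ in range(h):  # unroll the one-step model h times
                pred = step_fn(pred)
            sq_errs.append((pred - states[t + h]) ** 2)
        total += w * sum(sq_errs) / len(sq_errs)  # MSE at horizon h
    return total

# Toy uni-dimensional linear system s_{t+1} = 0.9 * s_t; rolling out the
# exact model on its own noise-free trajectory gives zero loss.
step = lambda s: 0.9 * s
traj = [1.0]
for _ in range(9):
    traj.append(step(traj[-1]))
loss = multistep_loss(step, traj, horizon=3, weights=[1.0, 0.5, 0.25])
```

With a perfect model the loss is zero on this noise-free rollout; a mismatched model, or observation noise added to `traj`, makes the longer-horizon terms grow, which is the regime where the abstract argues the multi-step loss helps.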
Related papers
- Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z)
- Deep autoregressive density nets vs neural ensembles for model-based offline reinforcement learning [2.9158689853305693]
We consider a model-based reinforcement learning algorithm that infers the system dynamics from the available data and performs policy optimization on imaginary model rollouts.
This approach is vulnerable to exploiting model errors which can lead to catastrophic failures on the real system.
We show that better performance can be obtained with a single well-calibrated autoregressive model on the D4RL benchmark.
arXiv Detail & Related papers (2024-02-05T10:18:15Z)
- Multi-timestep models for Model-based Reinforcement Learning [10.940666275830052]
In model-based reinforcement learning (MBRL), most algorithms rely on simulating trajectories from one-step dynamics models learned on data.
We tackle this issue by using a multi-timestep objective to train one-step models.
We find that exponentially decaying weights lead to models that significantly improve the long-horizon R2 score.
arXiv Detail & Related papers (2023-10-09T12:42:39Z)
- Theoretical Characterization of the Generalization Performance of Overfitted Meta-Learning [70.52689048213398]
This paper studies the performance of overfitted meta-learning under a linear regression model with Gaussian features.
We find new and interesting properties that do not exist in single-task linear regression.
Our analysis suggests that benign overfitting is more significant and easier to observe when the noise and the diversity/fluctuation of the ground truth of each training task are large.
arXiv Detail & Related papers (2023-04-09T20:36:13Z)
- Bayesian Active Learning for Discrete Latent Variable Models [19.852463786440122]
Active learning seeks to reduce the amount of data required to fit the parameters of a model.
Latent variable models play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines.
arXiv Detail & Related papers (2022-02-27T19:07:12Z)
- Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
arXiv Detail & Related papers (2022-02-17T07:56:46Z)
- Learning Dynamics from Noisy Measurements using Deep Learning with a Runge-Kutta Constraint [9.36739413306697]
We discuss a methodology to learn differential equation(s) using noisy and sparsely sampled measurements.
In our methodology, the main innovation is the integration of deep neural networks with a classical numerical integration method.
arXiv Detail & Related papers (2021-09-23T15:43:45Z)
- Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems, and presents a theorem on how close some approximate solutions can be.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.