Reinforcement Learning in System Identification
- URL: http://arxiv.org/abs/2212.07123v1
- Date: Wed, 14 Dec 2022 09:20:42 GMT
- Title: Reinforcement Learning in System Identification
- Authors: Jose Antonio Martin H., Oscar Fernandez Vicente, Sergio Perez, Anas
Belfadil, Cristina Ibanez-Llano, Freddy Jose Perozo Rondon, Jose Javier
Valle, Javier Arechalde Pelaz
- Abstract summary: System identification, also known as learning forward models, transfer functions, system dynamics, etc., has a long tradition both in science and engineering.
Here we explore the use of Reinforcement Learning in this problem.
We elaborate on why and how this problem fits naturally and soundly as a Reinforcement Learning problem, and present experimental results that demonstrate RL is a promising technique for solving this kind of problem.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: System identification, also known as learning forward models, transfer
functions, system dynamics, etc., has a long tradition both in science and
engineering in different fields. Particularly, it is a recurring theme in
Reinforcement Learning research, where forward models approximate the state
transition function of a Markov Decision Process by learning a mapping function
from current state and action to the next state. This problem is commonly
defined as a Supervised Learning problem in a direct way. This common approach
faces several difficulties due to the inherent complexities of the dynamics to
learn, for example, delayed effects, high non-linearity, non-stationarity,
partial observability and, more importantly, error accumulation when using
bootstrapped predictions (predictions based on past predictions) over large
time horizons. Here we explore the use of Reinforcement Learning for this
problem. We elaborate on why and how this problem fits naturally and soundly as
a Reinforcement Learning problem, and present experimental results that
demonstrate RL is a promising technique for solving this kind of problem.
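The error-accumulation problem the abstract describes can be made concrete with a minimal sketch (not from the paper; the linear dynamics, noise levels, and all names here are illustrative assumptions). A one-step forward model f(s, a) → s' is fit by supervised least squares, then rolled out with bootstrapped predictions, feeding its own outputs back in as inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true (unknown) linear dynamics: s' = A s + B a + noise.
A = np.array([[0.98, 0.1], [0.0, 0.97]])
B = np.array([[0.0], [0.1]])

def step(s, a):
    return A @ s + B @ a + rng.normal(0.0, 0.01, size=2)

# Collect (state, action, next_state) transitions under random actions.
X, Y = [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.normal(0.0, 1.0, size=1)
    s_next = step(s, a)
    X.append(np.concatenate([s, a]))
    Y.append(s_next)
    s = s_next
X, Y = np.array(X), np.array(Y)

# Supervised forward model: least squares mapping [s, a] -> s'.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def model(s, a):
    return np.concatenate([s, a]) @ W

# Roll out the true system and the model from the same start with the
# same action sequence. The model's input at each step is its own
# previous prediction (bootstrapping), so small one-step errors can
# compound over the horizon.
s_true, s_pred, errors = np.zeros(2), np.zeros(2), []
for _ in range(100):
    a = rng.normal(0.0, 1.0, size=1)
    s_true = step(s_true, a)
    s_pred = model(s_pred, a)  # bootstrapped prediction
    errors.append(np.linalg.norm(s_true - s_pred))
```

Framing the same task as RL, as the paper proposes, lets long-horizon prediction quality enter the objective directly instead of only the one-step supervised loss.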
Related papers
- Model-Based Reinforcement Learning Control of Reaction-Diffusion
Problems [0.0]
Reinforcement learning has been applied to decision-making in several applications, most notably in games.
We introduce two novel reward functions to drive the flow of the transported field.
Results show that certain controls can be implemented successfully in these applications.
arXiv Detail & Related papers (2024-02-22T11:06:07Z)
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- Contrastive Example-Based Control [163.6482792040079]
We propose a method for offline, example-based control that learns an implicit model of multi-step transitions, rather than a reward function.
Across a range of state-based and image-based offline control tasks, our method outperforms baselines that use learned reward functions.
arXiv Detail & Related papers (2023-07-24T19:43:22Z)
- Bayesian Learning for Dynamic Inference [2.2843885788439793]
In some sequential estimation problems, the future values of the quantity to be estimated depend on the estimate of its current value.
We formulate the Bayesian learning problem for dynamic inference, where the unknown quantity-generation model is assumed to be randomly drawn.
We derive the optimal Bayesian learning rules, both offline and online, to minimize the inference loss.
arXiv Detail & Related papers (2022-12-30T19:16:23Z)
- Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research regarding methods for learning physical concepts in time series data.
We also analyze the most important methods from the current state of the art using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z)
- Learning Stable Deep Dynamics Models for Partially Observed or Delayed Dynamical Systems [38.17499046781131]
For safety critical systems, it is crucial that the learned model is guaranteed to converge to some equilibrium point.
Neural ODEs regularized with neural Lyapunov functions are a promising approach when states are fully observed.
We show how to ensure stability of the learned models, and theoretically analyze our approach.
arXiv Detail & Related papers (2021-10-27T09:21:59Z)
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns these basis functions using a supervised learning approach.
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- Learning Temporal Dynamics from Cycles in Narrated Video [85.89096034281694]
We propose a self-supervised solution to the problem of learning to model how the world changes as time elapses.
Our model learns modality-agnostic functions to predict forward and backward in time, which must undo each other when composed.
We apply the learned dynamics model without further training to various tasks, such as predicting future action and temporally ordering sets of images.
arXiv Detail & Related papers (2021-01-07T02:41:32Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Variational Deep Learning for the Identification and Reconstruction of Chaotic and Stochastic Dynamical Systems from Noisy and Partial Observations [15.82296284460491]
The identification of governing equations remains challenging when dealing with noisy and partial observations.
Within the proposed framework, we learn an inference model to reconstruct the true states of the system.
This framework bridges classical data assimilation and state-of-the-art machine learning techniques.
arXiv Detail & Related papers (2020-09-04T16:48:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.