Model-Based Reinforcement Learning with SINDy
- URL: http://arxiv.org/abs/2208.14501v1
- Date: Tue, 30 Aug 2022 19:03:48 GMT
- Title: Model-Based Reinforcement Learning with SINDy
- Authors: Rushiv Arora, Bruno Castro da Silva, Eliot Moss
- Abstract summary: We propose a novel method for discovering the governing non-linear dynamics of physical systems in reinforcement learning (RL).
We establish that this method is capable of discovering the underlying dynamics using significantly fewer trajectories than state-of-the-art model learning algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We draw on the latest advancements in the physics community to propose a
novel method for discovering the governing non-linear dynamics of physical
systems in reinforcement learning (RL). We establish that this method is
capable of discovering the underlying dynamics using significantly fewer
trajectories (as few as one rollout with $\leq 30$ time steps) than
state-of-the-art model learning algorithms. Further, the technique learns a model that
is accurate enough to induce near-optimal policies given significantly fewer
trajectories than those required by model-free algorithms. It brings the
benefits of model-based RL without requiring a model to be developed in
advance, for systems that have physics-based dynamics.
To establish the validity and applicability of this algorithm, we conduct
experiments on four classic control tasks. We find that an optimal policy
trained on the discovered dynamics of the underlying system can generalize
well. Further, the learned policy performs well when deployed on the actual
physical system, thus bridging the model-to-real-system gap. We further compare
our method to state-of-the-art model-based and model-free approaches, and show
that our method requires fewer trajectories sampled on the true physical system
compared to other methods. Additionally, we explored approximate dynamics models
and found that they can also perform well.
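To make the core idea concrete, here is a minimal sketch of SINDy-style dynamics discovery using the open-source pysindy package. The damped-pendulum rollout, the function library, and the sparsity threshold are illustrative assumptions for this sketch, not the authors' exact experimental setup; only the use of SINDy itself comes from the paper.

```python
import numpy as np
import pysindy as ps

# Generate one short rollout (<= 30 steps, as in the abstract) from a
# stand-in system; here, a damped pendulum integrated with forward Euler.
dt = 0.02
t = np.arange(30) * dt
states = [np.array([0.5, 0.0])]  # [angle, angular velocity]
for _ in range(len(t) - 1):
    theta, omega = states[-1]
    dxdt = np.array([omega, -9.81 * np.sin(theta) - 0.1 * omega])
    states.append(states[-1] + dt * dxdt)
X = np.stack(states)

# Fit a sparse model dx/dt = Theta(x) @ Xi over a combined function library.
model = ps.SINDy(
    optimizer=ps.STLSQ(threshold=0.05),
    feature_library=ps.PolynomialLibrary(degree=2) + ps.FourierLibrary(),
)
model.fit(X, t=dt)
model.print()  # prints the recovered symbolic equations of motion

# The discovered model can now act as a cheap simulator for policy training.
x_sim = model.simulate(X[0], t)
```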
Related papers
- Physics-Informed Model-Based Reinforcement Learning [19.01626581411011]
One of the drawbacks of traditional reinforcement learning algorithms is their poor sample efficiency.
We learn a model of the environment, essentially its transition dynamics and reward function, use it to generate imaginary trajectories, and backpropagate through them to update the policy (a minimal sketch of this pattern appears after this list).
We show that, in model-based RL, model accuracy mainly matters in environments that are sensitive to initial conditions.
We also show that, in challenging environments, physics-informed model-based RL achieves better average-return than state-of-the-art model-free RL algorithms.
arXiv Detail & Related papers (2022-12-05T11:26:10Z)
- Model Generation with Provable Coverability for Offline Reinforcement Learning [14.333861814143718]
Offline optimization with a dynamics-aware policy provides a new perspective for policy learning and out-of-distribution generalization.
However, due to the limitations of the offline setting, the learned model may not mimic the real dynamics well enough to support reliable out-of-distribution exploration.
We propose an algorithm that generates models optimized for coverage of the real dynamics.
arXiv Detail & Related papers (2022-06-01T08:34:09Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons; a sketch of gradient-based trajectory optimization through such a model appears after this list.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot robot and a radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL [0.0]
This paper shows how reinforcement learning can be used at an operational level on accelerator physics problems.
We compare purely model-based to model-free reinforcement learning applied to intensity optimisation on the FERMI FEL system.
We find that the model-based approach demonstrates higher representational power and sample efficiency, while the model-free method achieves slightly better performance.
arXiv Detail & Related papers (2020-12-17T16:57:27Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures that learn to operate from data and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning [72.18725551199842]
We propose a novel model-based reinforcement learning algorithm called BrIdging Reality and Dream (BIRD).
It maximizes the mutual information between imaginary and real trajectories so that the policy improvement learned from imaginary trajectories can be easily generalized to real trajectories.
We demonstrate that our approach improves sample efficiency of model-based planning, and achieves state-of-the-art performance on challenging visual control benchmarks.
arXiv Detail & Related papers (2020-10-23T03:22:01Z)
- Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information-theoretic MPC and entropy-regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
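The Physics-Informed Model-Based Reinforcement Learning entry above describes learning a model of the environment, generating imaginary trajectories, and backpropagating through them to update the policy. Below is a minimal PyTorch sketch of that generic pattern; the network sizes, horizon, and the use of PyTorch are assumptions for illustration, not that paper's architecture.

```python
import torch
import torch.nn as nn

state_dim, action_dim, horizon = 3, 1, 15

# Learned, differentiable components; their own training is omitted here.
dynamics = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(),
                         nn.Linear(64, state_dim))
reward = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(),
                       nn.Linear(64, 1))
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                       nn.Linear(64, action_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def imagined_return(s):
    """Unroll the policy through the learned model, summing predicted rewards."""
    ret = torch.zeros(s.shape[0])
    for _ in range(horizon):
        a = policy(s)
        sa = torch.cat([s, a], dim=-1)
        ret = ret + reward(sa).squeeze(-1)
        s = dynamics(sa)  # imaginary next state, differentiable w.r.t. policy
    return ret

s0 = torch.randn(32, state_dim)     # batch of (assumed) start states
loss = -imagined_return(s0).mean()  # maximize the imagined return
optimizer.zero_grad()
loss.backward()                     # gradients flow through the imagined rollout
optimizer.step()
```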
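Similarly, the Gradient-Based Trajectory Optimization entry describes optimizing a trajectory by differentiating through a learned dynamics model. A minimal sketch under the same assumptions (PyTorch, an untrained stand-in dynamics network, a quadratic goal-reaching cost) follows.

```python
import torch
import torch.nn as nn

state_dim, action_dim, horizon = 4, 2, 25

# A learned, differentiable dynamics model (weights assumed to be pre-trained).
dynamics = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(),
                         nn.Linear(64, state_dim))

x0 = torch.zeros(state_dim)    # start state (assumed)
goal = torch.ones(state_dim)   # goal state (assumed)
actions = torch.zeros(horizon, action_dim, requires_grad=True)
optimizer = torch.optim.Adam([actions], lr=0.05)

for _ in range(200):
    x, cost = x0, torch.zeros(())
    for a in actions:          # unroll the learned model over the horizon
        x = dynamics(torch.cat([x, a]))
        cost = cost + ((x - goal) ** 2).sum() + 1e-3 * (a ** 2).sum()
    optimizer.zero_grad()
    cost.backward()            # gradient of the cost w.r.t. the action plan
    optimizer.step()
```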