Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with
Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2103.13842v1
- Date: Thu, 25 Mar 2021 13:50:24 GMT
- Title: Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with
Deep Reinforcement Learning
- Authors: Andrew S. Morgan, Daljeet Nandha, Georgia Chalvatzaki, Carlo D'Eramo,
Aaron M. Dollar, and Jan Peters
- Abstract summary: Model Predictive Actor-Critic (MoPAC) is a hybrid model-based/model-free method that combines model predictive rollouts with policy optimization to mitigate model bias.
MoPAC guarantees optimal skill learning up to an approximation error and reduces necessary physical interaction with the environment.
- Score: 42.525696463089794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Substantial advancements to model-based reinforcement learning algorithms
have been impeded by the model-bias induced by the collected data, which
generally hurts performance. Meanwhile, their inherent sample efficiency
warrants utility for most robot applications, limiting potential damage to the
robot and its environment during training. Inspired by information theoretic
model predictive control and advances in deep reinforcement learning, we
introduce Model Predictive Actor-Critic (MoPAC), a hybrid
model-based/model-free method that combines model predictive rollouts with
policy optimization to mitigate model bias. MoPAC leverages optimal
trajectories to guide policy learning, but explores via its model-free method,
allowing the algorithm to learn more expressive dynamics models. This
combination guarantees optimal skill learning up to an approximation error and
reduces necessary physical interaction with the environment, making it suitable
for real-robot training. We provide extensive results showcasing how our
proposed method generally outperforms current state-of-the-art and conclude by
evaluating MoPAC for learning on a physical robotic hand performing valve
rotation and finger gaiting--a task that requires grasping, manipulation, and
then regrasping of an object.
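The abstract describes the mechanism only at a high level: a learned dynamics model produces model predictive rollouts whose actions guide policy updates, while the model-free policy keeps exploring the real system and feeding data back into the model. The toy sketch below illustrates that loop under heavy simplifications; the environment, the random-shooting planner standing in for information-theoretic MPC, the linear policy, and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of a hybrid model-based/model-free
# loop in the spirit of MoPAC: a learned dynamics model drives model predictive
# rollouts whose actions guide policy learning, while the policy itself explores
# on the "real" system and supplies data for refitting the model.
import numpy as np

rng = np.random.default_rng(0)

def true_step(s, a):
    """Ground-truth toy dynamics: a 1D point mass we want to drive to the origin."""
    pos, vel = s
    vel = vel + 0.1 * a
    pos = pos + 0.1 * vel
    return np.array([pos, vel]), -(pos ** 2 + 0.1 * vel ** 2)  # (next state, reward)

def fit_model(states, actions, next_states):
    """Fit a linear dynamics model s' ~ [s, a, 1] @ W by least squares."""
    X = np.column_stack([states, actions, np.ones(len(actions))])
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)
    return W

def model_step(W, s, a):
    return np.concatenate([s, [a, 1.0]]) @ W

def mpc_action(W, s, horizon=15, n_candidates=128):
    """Random-shooting MPC under the learned model: return the first action of the
    best sampled sequence (a crude stand-in for information-theoretic MPC)."""
    best_a, best_ret = 0.0, -np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, horizon)
        sim_s, ret = s.copy(), 0.0
        for a in seq:
            sim_s = model_step(W, sim_s, a)
            ret += -(sim_s[0] ** 2 + 0.1 * sim_s[1] ** 2)
        if ret > best_ret:
            best_ret, best_a = ret, seq[0]
    return best_a

theta = np.zeros(2)                      # linear policy: a = theta @ s
S, A, S2 = [], [], []                    # replay buffer
s = np.array([1.0, 0.0])

for step in range(400):
    # model-free exploration: noisy policy action on the real system
    a = float(np.clip(theta @ s + 0.3 * rng.standard_normal(), -1, 1))
    s2, r = true_step(s, a)
    S.append(s); A.append(a); S2.append(s2)
    s = s2 if abs(s2[0]) < 5 else np.array([1.0, 0.0])

    if step >= 50 and step % 50 == 0:
        W = fit_model(np.array(S), np.array(A), np.array(S2))
        # model predictive rollouts guide the policy: regress the policy
        # toward MPC actions on recently visited states
        guide_states = np.array(S[-50:])
        guide_actions = np.array([mpc_action(W, gs) for gs in guide_states])
        theta, *_ = np.linalg.lstsq(guide_states, guide_actions, rcond=None)
        print(f"step {step}: policy params {theta.round(3)}")
```

In MoPAC the planner, policy, and critic are far richer (deep networks, information-theoretic MPC, actor-critic updates), but the data flow matches the abstract: model rollouts guide policy learning, while model-free exploration gathers the data that makes the dynamics model more expressive.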
Related papers
- Learning Low-Dimensional Strain Models of Soft Robots by Looking at the Evolution of Their Shape with Application to Model-Based Control [2.058941610795796]
This paper introduces a streamlined method for learning low-dimensional, physics-based models.
We validate our approach through simulations with various planar soft manipulators.
Because the method generates physically compatible models, the learned models can be combined straightforwardly with model-based control policies.
arXiv Detail & Related papers (2024-10-31T18:37:22Z)
- Model-based Policy Optimization using Symbolic World Model [46.42871544295734]
The application of learning-based control methods in robotics presents significant challenges.
One is the low sample efficiency of model-free reinforcement learning algorithms, which require large amounts of observation data.
We suggest approximating transition dynamics with symbolic expressions, which are generated via symbolic regression.
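As a rough illustration of fitting transition dynamics with symbolic expressions, the sketch below uses sparse regression over a hand-chosen library of candidate terms (a SINDy-style stand-in); the paper itself generates expressions via symbolic regression, and every function, constant, and term in the library here is an assumption.

```python
# Illustrative stand-in (not the paper's method): approximate one-step dynamics
# with a sparse combination of candidate symbolic terms, fit by iteratively
# thresholded least squares. A genetic-programming symbolic regressor would
# search expression trees instead; this sketch only shows the flavor.
import numpy as np

rng = np.random.default_rng(1)

# collect transitions from an "unknown" pendulum-like system
def unknown_step(theta, omega, u, dt=0.05):
    return theta + dt * omega, omega + dt * (-9.8 * np.sin(theta) + u)

theta = rng.uniform(-1, 1, 2000)
omega = rng.uniform(-2, 2, 2000)
u = rng.uniform(-1, 1, 2000)
theta2, omega2 = unknown_step(theta, omega, u)

# library of candidate symbolic terms
names = ["1", "theta", "omega", "u", "sin(theta)", "cos(theta)", "theta*omega"]
library = np.column_stack([np.ones_like(theta), theta, omega, u,
                           np.sin(theta), np.cos(theta), theta * omega])

def sparse_fit(library, target, threshold=1e-3, iters=10):
    """Least squares followed by repeated pruning of small coefficients."""
    coef, *_ = np.linalg.lstsq(library, target, rcond=None)
    for _ in range(iters):
        small = np.abs(coef) < threshold
        coef[small] = 0.0
        big = ~small
        sol, *_ = np.linalg.lstsq(library[:, big], target, rcond=None)
        coef[big] = sol
    return coef

# expect roughly: theta' = theta + 0.05*omega, omega' = omega - 0.49*sin(theta) + 0.05*u
for label, target in [("theta'", theta2), ("omega'", omega2)]:
    coef = sparse_fit(library, target)
    terms = " + ".join(f"{c:.3f}*{n}" for c, n in zip(coef, names) if c != 0.0)
    print(f"{label} ~ {terms}")
```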
arXiv Detail & Related papers (2024-07-18T13:49:21Z)
- Active Exploration in Bayesian Model-based Reinforcement Learning for Robot Manipulation [8.940998315746684]
We propose a model-based reinforcement learning (RL) approach for robotic arm end-tasks.
We employ Bayesian neural network models to represent, in a probabilistic way, both the belief and information encoded in the dynamic model during exploration.
Our experiments show the advantages of our Bayesian model-based RL approach, with results of similar quality to relevant alternatives.
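A loose sketch of the idea described above: the paper uses Bayesian neural networks, while the stand-in below approximates a posterior with a bootstrapped ensemble of linear models and scores candidate actions by ensemble disagreement; all modeling choices here are assumptions.

```python
# Loose illustration (assumed details, not the paper's method): an ensemble of
# bootstrapped dynamics models as a cheap proxy for a Bayesian posterior; candidate
# actions are scored by ensemble disagreement, and the most "informative" is chosen.
import numpy as np

rng = np.random.default_rng(2)

def real_step(s, a):
    return 0.9 * s + 0.2 * np.tanh(a)     # unknown scalar dynamics

# transition data seen so far (deliberately covering only a narrow action range)
S = rng.uniform(-1, 1, 200)
A = rng.uniform(-0.3, 0.3, 200)
S2 = real_step(S, A)

def fit_ensemble(S, A, S2, n_models=10):
    """Fit each member on a bootstrap resample: a crude posterior over models."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(S), len(S))
        X = np.column_stack([S[idx], A[idx], np.ones(len(idx))])
        w, *_ = np.linalg.lstsq(X, S2[idx], rcond=None)
        models.append(w)
    return np.array(models)               # shape (n_models, 3)

def disagreement(models, s, a):
    preds = models @ np.array([s, a, 1.0])
    return preds.var()

models = fit_ensemble(S, A, S2)
candidates = np.linspace(-1, 1, 9)
scores = [disagreement(models, 0.5, a) for a in candidates]
a_explore = candidates[int(np.argmax(scores))]
# actions far outside the observed data should look the most informative
print("candidate actions:", candidates.round(2))
print("disagreement:", np.round(scores, 8))
print("chosen exploratory action:", a_explore)
```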
arXiv Detail & Related papers (2024-04-02T11:44:37Z)
- STORM: Efficient Stochastic Transformer based World Models for Reinforcement Learning [82.03481509373037]
Recently, model-based reinforcement learning algorithms have demonstrated remarkable efficacy in visual input environments.
We introduce Stochastic Transformer-based wORld Model (STORM), an efficient world model architecture that combines strong modeling and generation capabilities.
STORM achieves a mean human performance of 126.7% on the Atari 100k benchmark, setting a new record among state-of-the-art methods.
arXiv Detail & Related papers (2023-10-14T16:42:02Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
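A minimal sketch of residual-error learning, assuming a toy scalar system and a polynomial residual model; the paper's learning-based unscented Kalman filter is omitted, and all constants below are illustrative assumptions.

```python
# Minimal sketch (assumptions only): predict with a nominal simulator model, learn
# the residual error to the real system from sparse data, and use
# "simulator + residual" as the corrected model.
import numpy as np

rng = np.random.default_rng(3)

def sim_model(s, a):
    return s + 0.1 * a                    # nominal model used by the controller

def real_system(s, a):
    return s + 0.12 * a - 0.05 * s ** 2   # reality differs: gain error plus a drag term

# sparse real-robot data
S = rng.uniform(-2, 2, 40)
A = rng.uniform(-1, 1, 40)
S2 = real_system(S, A)
residual = S2 - sim_model(S, A)

# fit a small polynomial residual model r(s, a)
X = np.column_stack([np.ones_like(S), S, A, S ** 2])
w, *_ = np.linalg.lstsq(X, residual, rcond=None)

def corrected_model(s, a):
    return sim_model(s, a) + np.array([1.0, s, a, s ** 2]) @ w

s_test, a_test = 1.5, 0.8
print("simulator prediction :", sim_model(s_test, a_test))
print("corrected prediction :", corrected_model(s_test, a_test))
print("real next state      :", real_system(s_test, a_test))
```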
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Online Dynamics Learning for Predictive Control with an Application to Aerial Robots [3.673994921516517]
Even though prediction models can be learned and applied to model-based controllers, these models are often learned offline.
In this offline setting, training data is first collected and a prediction model is learned through an elaborated training procedure.
We propose an online dynamics learning framework that continually improves the accuracy of the dynamic model during deployment.
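One simple way to keep improving a dynamics model during deployment is a recursive update after every observed transition; the sketch below uses recursive least squares with a forgetting factor on a toy drifting system and only illustrates the online-learning idea, not the paper's framework.

```python
# Minimal sketch (assumed setup): instead of fitting the dynamics model once
# offline, update it recursively after every step during deployment, here with
# recursive least squares (RLS) and a forgetting factor.
import numpy as np

rng = np.random.default_rng(5)

def real_step(s, a, t):
    # the true gain drifts over time, so a purely offline model becomes stale
    gain = 0.10 + 0.05 * np.sin(0.01 * t)
    return 0.9 * s + gain * a

dim = 3                      # features: [s, a, 1]
w = np.zeros(dim)            # online model: s' ~ w @ [s, a, 1]
P = np.eye(dim) * 100.0      # parameter covariance (RLS state)
lam = 0.995                  # forgetting factor: discounts old data

s = 0.0
for t in range(2000):
    a = rng.uniform(-1, 1)
    s_next = real_step(s, a, t)
    x = np.array([s, a, 1.0])
    # recursive least squares update of the dynamics parameters
    err = s_next - w @ x
    K = P @ x / (lam + x @ P @ x)
    w = w + K * err
    P = (P - np.outer(K, x) @ P) / lam
    s = s_next

print("online model parameters:", w.round(3))  # roughly [0.9, current gain, 0]
```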
arXiv Detail & Related papers (2022-07-19T15:51:25Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot and Radio-controlled (RC) car.
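A small sketch of the pipeline described above, with assumed toy dynamics in place of the Spot and RC-car models: fit a differentiable neural dynamics model to interaction data, then optimize an action sequence by backpropagating the trajectory cost through the learned model.

```python
# Illustrative sketch (assumed details, not the paper's implementation):
# 1) learn a differentiable dynamics model, 2) optimize an action sequence by
# gradient descent through the learned model.
import torch

torch.manual_seed(0)

def real_step(s, a):
    return 0.95 * s + 0.3 * torch.tanh(a)   # toy "hardware" dynamics

# 1) fit a small neural dynamics model from random-interaction data
S = torch.rand(512, 1) * 4 - 2
A = torch.rand(512, 1) * 2 - 1
S2 = real_step(S, A)

model = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(torch.cat([S, A], dim=1)), S2)
    loss.backward()
    opt.step()
for p in model.parameters():
    p.requires_grad_(False)                  # freeze the model for planning

# 2) gradient-based trajectory optimization: drive the state toward a goal of 1.0
horizon = 20
actions = torch.zeros(horizon, 1, requires_grad=True)
traj_opt = torch.optim.Adam([actions], lr=0.1)
for _ in range(200):
    traj_opt.zero_grad()
    s = torch.zeros(1, 1)
    cost = 0.0
    for t in range(horizon):
        s = model(torch.cat([s, actions[t:t + 1]], dim=1))  # roll out the learned model
        cost = cost + (s - 1.0).pow(2).sum() + 1e-3 * actions[t].pow(2).sum()
    cost.backward()                           # gradients flow through the dynamics model
    traj_opt.step()

print("optimized first actions:", actions.detach().squeeze()[:5])
```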
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- Sample-Efficient Reinforcement Learning via Conservative Model-Based Actor-Critic [67.00475077281212]
Model-based reinforcement learning algorithms are more sample efficient than their model-free counterparts.
We propose a novel approach that achieves high sample efficiency without the strong reliance on accurate learned models.
We show that CMBAC significantly outperforms state-of-the-art approaches in terms of sample efficiency on several challenging tasks.
arXiv Detail & Related papers (2021-12-16T15:33:11Z)
- Sample Efficient Reinforcement Learning via Model-Ensemble Exploration and Exploitation [3.728946517493471]
MEEE is a model-ensemble method that consists of optimistic exploration and weighted exploitation.
Our approach outperforms other model-free and model-based state-of-the-art methods, especially in sample complexity.
arXiv Detail & Related papers (2021-07-05T07:18:20Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
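MoPAC explicitly cites this line of work as inspiration. The core information-theoretic MPC update can be sketched as sampling perturbed action sequences, weighting them by the exponentiated negative cost, and averaging; the toy dynamics and cost below are assumptions, and the paper's use of a learned soft Q-function as a terminal term is only indicated by a comment.

```python
# Rough sketch of an information-theoretic MPC (MPPI-style) update: sample
# perturbed action sequences, weight them by exp(-cost / lambda), and average.
# Toy dynamics/cost are assumptions; a learned soft Q-function could bootstrap
# the rollout cost as in the cited paper.
import numpy as np

rng = np.random.default_rng(4)

def step(s, a):
    return 0.9 * s + 0.1 * a

def rollout_cost(s0, actions):
    s, cost = s0, 0.0
    for a in actions:
        s = step(s, a)
        cost += (s - 1.0) ** 2 + 0.01 * a ** 2
    return cost   # a learned terminal (soft) value term could be added here

horizon, n_samples, lam = 10, 128, 1.0
mean_actions = np.zeros(horizon)
s0 = 0.0

for _ in range(20):  # receding-horizon updates
    noise = rng.standard_normal((n_samples, horizon))
    costs = np.array([rollout_cost(s0, mean_actions + eps) for eps in noise])
    weights = np.exp(-(costs - costs.min()) / lam)
    weights /= weights.sum()
    mean_actions = mean_actions + weights @ noise   # information-theoretic averaging

print("planned action sequence:", mean_actions.round(3))
```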
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.