Learning Policies for Continuous Control via Transition Models
- URL: http://arxiv.org/abs/2209.08033v1
- Date: Fri, 16 Sep 2022 16:23:48 GMT
- Title: Learning Policies for Continuous Control via Transition Models
- Authors: Justus Huebotter, Serge Thill, Marcel van Gerven, Pablo Lanillos
- Abstract summary: In robot control, moving an arm's end-effector to a target position or along a target trajectory requires accurate forward and inverse models.
We show that by learning the transition (forward) model from interaction, we can use it to drive the learning of an amortized policy.
- Score: 2.831332389089239
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is doubtful that animals have perfect inverse models of their limbs (e.g.,
what muscle contraction must be applied to every joint to reach a particular
location in space). However, in robot control, moving an arm's end-effector to
a target position or along a target trajectory requires accurate forward and
inverse models. Here we show that by learning the transition (forward) model
from interaction, we can use it to drive the learning of an amortized policy.
Hence, we revisit policy optimization in relation to the deep active inference
framework and describe a modular neural network architecture that
simultaneously learns the system dynamics from prediction errors and the
stochastic policy that generates suitable continuous control commands to reach
a desired reference position. We evaluate the model against a linear quadratic
regulator baseline and conclude with additional steps toward human-like motor
control.
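To make the idea concrete, below is a minimal sketch of how a transition (forward) model learned from prediction errors can drive the training of an amortized stochastic policy that steers the state toward a reference position. PyTorch is used for concreteness; all module names, network sizes, and losses are illustrative assumptions, not the authors' architecture.

# Sketch: joint learning of a transition model and an amortized policy.
# All names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn


class TransitionModel(nn.Module):
    """Predicts the next state from the current state and action."""

    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


class AmortizedPolicy(nn.Module):
    """Maps (state, reference) to a Gaussian over continuous actions."""

    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * action_dim),
        )

    def forward(self, state, reference):
        mu, log_std = self.net(torch.cat([state, reference], dim=-1)).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_std.clamp(-5, 2).exp())


state_dim, action_dim = 4, 2
model = TransitionModel(state_dim, action_dim)
policy = AmortizedPolicy(state_dim, action_dim)
model_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)


def update(state, action, next_state, reference):
    """One joint update from an observed transition (s, a, s') and a target."""
    # 1) The transition model learns from its prediction error on real data.
    model_loss = (model(state, action) - next_state).pow(2).mean()
    model_opt.zero_grad()
    model_loss.backward()
    model_opt.step()

    # 2) The policy is trained by differentiating the predicted outcome of its
    #    own (reparameterized) action through the transition model; any stray
    #    gradients left on the model are cleared by the next zero_grad() call.
    dist = policy(state, reference)
    predicted_next = model(state, dist.rsample())
    policy_loss = (predicted_next - reference).pow(2).mean()
    policy_opt.zero_grad()
    policy_loss.backward()
    policy_opt.step()
    return model_loss.item(), policy_loss.item()

For the baseline mentioned above, a standard discrete-time linear quadratic regulator can be obtained from a Riccati solve; the system matrices below are placeholders, since the paper's plant is not specified here (SciPy assumed).

# Sketch: discrete-time LQR baseline with placeholder system matrices.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed linear dynamics x' = A x + B u
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)        # state and control cost weights

P = solve_discrete_are(A, B, Q, R)       # solve the discrete algebraic Riccati equation
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback gain


def lqr_control(x, x_ref):
    """Linear state feedback toward the reference state."""
    return -K @ (x - x_ref)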
Related papers
- DTC: Deep Tracking Control [16.2850135844455]
We propose a hybrid control architecture that combines the advantages of model-based optimal control and reinforcement learning to achieve greater robustness, foot-placement accuracy, and terrain generalization.
A deep neural network policy is trained in simulation, aiming to track the optimized footholds.
We demonstrate superior robustness in the presence of slippery or deformable ground when compared to model-based counterparts.
arXiv Detail & Related papers (2023-09-27T07:57:37Z) - Model-free tracking control of complex dynamical trajectories with
machine learning [0.2356141385409842]
We develop a model-free, machine-learning framework to control a two-arm robotic manipulator.
We demonstrate the effectiveness of the control framework using a variety of periodic and chaotic signals.
arXiv Detail & Related papers (2023-09-20T17:10:10Z) - Model-Based Reinforcement Learning with Isolated Imaginations [61.67183143982074]
We propose Iso-Dream++, a model-based reinforcement learning approach.
We perform policy optimization based on the decoupled latent imaginations.
This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild.
arXiv Detail & Related papers (2023-03-27T02:55:56Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate
Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z) - Learning Contraction Policies from Offline Data [1.5771347525430772]
We propose a data-driven method for learning convergent control policies from offline data using contraction theory.
We learn the control policy and its corresponding contraction metric while enforcing contraction.
We evaluate the performance of our proposed framework on simulated robotic goal-reaching tasks.
arXiv Detail & Related papers (2021-12-11T03:48:51Z) - An Adaptable Approach to Learn Realistic Legged Locomotion without
Examples [38.81854337592694]
This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum model as a reference.
We present experimental results showing that even in a model-free setup, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot.
arXiv Detail & Related papers (2021-10-28T10:14:47Z) - GEM: Group Enhanced Model for Learning Dynamical Control Systems [78.56159072162103]
We build effective dynamical models that are amenable to sample-based learning.
We show that learning the dynamics on a Lie algebra vector space is more effective than learning a direct state transition model.
This work sheds light on a connection between learning of dynamics and Lie group properties, which opens doors for new research directions.
arXiv Detail & Related papers (2021-04-07T01:08:18Z) - Reinforcement Learning for Robust Parameterized Locomotion Control of
Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
arXiv Detail & Related papers (2021-03-26T07:14:01Z) - Meta-Reinforcement Learning for Adaptive Motor Control in Changing Robot
Dynamics and Environments [3.5309638744466167]
This work developed a meta-learning approach that adapts the control policy on the fly to different changing conditions for robust locomotion.
The proposed method constantly updates the interaction model, samples feasible sequences of actions, estimates the resulting state-action trajectories, and then applies the optimal actions to maximize the reward.
arXiv Detail & Related papers (2021-01-19T12:57:12Z) - Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z)