Improving Input-Output Linearizing Controllers for Bipedal Robots via
Reinforcement Learning
- URL: http://arxiv.org/abs/2004.07276v2
- Date: Sat, 2 May 2020 10:50:13 GMT
- Title: Improving Input-Output Linearizing Controllers for Bipedal Robots via
Reinforcement Learning
- Authors: Fernando Castañeda, Mathias Wulfman, Ayush Agrawal, Tyler
Westenbroek, Claire J. Tomlin, S. Shankar Sastry, Koushil Sreenath
- Abstract summary: The main drawbacks of input-output linearizing controllers are the need for precise dynamics models and the inability to account for input constraints.
In this paper, we address both challenges for the specific case of bipedal robot control by the use of reinforcement learning techniques.
- Score: 85.13138591433635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The main drawbacks of input-output linearizing controllers are the
need for precise dynamics models and the inability to account for input
constraints.
Model uncertainty is common in almost every robotic application and input
saturation is present in every real world system. In this paper, we address
both challenges for the specific case of bipedal robot control by the use of
reinforcement learning techniques. Taking the structure of a standard
input-output linearizing controller, we use an additive learned term that
compensates for model uncertainty. Moreover, by adding constraints to the
learning problem we manage to boost the performance of the final controller
when input limits are present. We demonstrate the effectiveness of the designed
framework for different levels of uncertainty on the five-link planar walking
robot RABBIT.
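To make the structure above concrete, here is a minimal Python sketch of an input-output linearizing controller augmented with an additive learned term and hard input saturation, as the abstract describes. The model interface (`decoupling_matrix`, `output_drift`), the `policy` callable, and all constants are hypothetical placeholders rather than the paper's actual implementation, and the constrained-learning step is only indicated in a comment.

```python
import numpy as np

# Minimal sketch, assuming illustrative interfaces: a nominal
# input-output linearizing term plus an additive learned term,
# with hard input saturation at execution time.

N_JOINTS = 4     # RABBIT has four actuated joints
U_MAX = 100.0    # assumed torque limit [N*m], not from the paper

def io_linearizing_control(x, y, dy, model, gains):
    """Standard input-output linearizing controller built on a
    (possibly imprecise) dynamics model of the outputs."""
    kp, kd = gains
    v = -kp * y - kd * dy                 # linear outer loop on the outputs
    A = model.decoupling_matrix(x)        # assumed model API
    b = model.output_drift(x)             # assumed model API
    return np.linalg.solve(A, v - b)      # u = A(x)^{-1} (v - b(x))

def controller(x, y, dy, model, gains, policy):
    """Nominal term plus learned compensation for model uncertainty.
    In the paper the learning problem is additionally constrained so
    the combined command respects input limits; here we only clip."""
    u_nom = io_linearizing_control(x, y, dy, model, gains)
    u_learned = policy(x)                 # e.g., an RL policy
    return np.clip(u_nom + u_learned, -U_MAX, U_MAX)

# Toy usage with stand-in model and policy (illustration only):
class ToyModel:
    def decoupling_matrix(self, x):
        return np.eye(N_JOINTS)
    def output_drift(self, x):
        return np.zeros(N_JOINTS)

x = np.zeros(2 * N_JOINTS + 2)            # placeholder full state
y, dy = 0.1 * np.ones(N_JOINTS), np.zeros(N_JOINTS)
u = controller(x, y, dy, ToyModel(), (100.0, 20.0),
               lambda x: np.zeros(N_JOINTS))
```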
Related papers
- Neural Internal Model Control: Learning a Robust Control Policy via Predictive Error Feedback [16.46487826869775]
We propose a novel framework, Neural Internal Model Control, which integrates model-based control with RL-based control to enhance robustness.
Our framework streamlines the predictive model by applying Newton-Euler equations for rigid-body dynamics, eliminating the need to capture complex high-dimensional nonlinearities.
We demonstrate the effectiveness of our framework on both quadrotors and quadrupedal robots, achieving superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-11-20T07:07:42Z)
- On-device Self-supervised Learning of Visual Perception Tasks aboard Hardware-limited Nano-quadrotors [53.59319391812798]
Sub-50-gram nano-drones are gaining momentum in both academia and industry.
Their most compelling applications rely on onboard deep learning models for perception.
When deployed in unknown environments, these models often underperform due to domain shift.
We propose, for the first time, on-device learning aboard nano-drones, where the first part of the in-field mission is dedicated to self-supervised fine-tuning.
arXiv Detail & Related papers (2024-03-06T22:04:14Z)
- Combining model-predictive control and predictive reinforcement learning for stable quadrupedal robot locomotion [0.0]
We study how this can be achieved by a combination of model-predictive and predictive reinforcement learning controllers.
In this work, we combine both control methods to address the quadrupedal robot stable gait generation problem.
arXiv Detail & Related papers (2023-07-15T09:22:37Z)
- AI Enhanced Control Engineering Methods [66.08455276899578]
We explore how AI tools can be useful in control applications.
Two immediate applications are the linearization of system dynamics for local stability analysis and for state estimation using Kalman filters.
In addition, we explore the use of machine learning models for global parameterizations of state vectors and control inputs in model predictive control applications.
arXiv Detail & Related papers (2023-06-08T20:31:14Z)
- Differentiable Constrained Imitation Learning for Robot Motion Planning and Control [0.26999000177990923]
We develop a framework for constrained robotic motion planning and control, as well as traffic agent simulation.
We focus on mobile robot and automated driving applications.
Simulated experiments of mobile robot navigation and automated driving provide evidence for the performance of the proposed method.
arXiv Detail & Related papers (2022-10-21T08:19:45Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot (a toy residual-fitting sketch follows this list).
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Automatic Rule Induction for Efficient Semi-Supervised Learning [56.91428251227253]
Semi-supervised learning has shown promise in allowing NLP models to generalize from small amounts of labeled data.
Pretrained transformer models act as black-box correlation engines that are difficult to explain and sometimes behave unreliably.
We propose tackling both of these challenges via Automatic Rule Induction (ARI), a simple and general-purpose framework.
arXiv Detail & Related papers (2022-05-18T16:50:20Z)
- OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation [50.59541802645156]
Operational Space Control (OSC) has been used as an effective task-space controller for manipulation.
We propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of OSC that compensates for modeling errors.
We evaluate our method on a variety of simulated manipulation problems, and find substantial improvements over an array of controller baselines.
arXiv Detail & Related papers (2021-10-02T01:21:38Z)
- Towards Safe Control of Continuum Manipulator Using Shielded Multiagent Reinforcement Learning [1.2647816797166165]
The control of the robot is formulated as a one-DoF, one-agent problem in the multi-agent deep Q-network (MADQN) framework to improve the learning efficiency.
Shielded MADQN enabled the robot to perform point and trajectory tracking with submillimeter root mean square errors under external loads.
arXiv Detail & Related papers (2021-06-15T05:55:05Z)
- Model-based Reinforcement Learning from Signal Temporal Logic Specifications [0.17205106391379021]
We propose expressing desired high-level robot behavior using a formal specification language known as Signal Temporal Logic (STL) as an alternative to reward/cost functions.
The proposed algorithm is empirically evaluated on simulations of robotic systems such as a pick-and-place robotic arm, and adaptive cruise control for autonomous vehicles.
arXiv Detail & Related papers (2020-11-10T07:31:47Z)
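A recurring theme in the list above is learning a residual correction on top of a nominal model, most explicitly in the Real-to-Sim entry. The sketch below, referenced from that entry, fits residual prediction errors with plain least squares; this is a deliberately simplified stand-in for that paper's learning-based Unscented Kalman Filter, and all names, shapes, and the toy dynamics are illustrative assumptions.

```python
import numpy as np

# Toy sketch of residual-error learning between a nominal model and
# logged robot data, using plain least squares as an assumed stand-in
# for the paper's learning-based Unscented Kalman Filter.

def nominal_model(x, u):
    """Placeholder analytic/simulator prediction of the next state."""
    return x + 0.01 * u  # toy dynamics for illustration

def fit_residual(X, U, X_next):
    """Least-squares fit of the residual e = x_next - f_nominal(x, u)
    as a linear function of the features [x, u, 1]."""
    preds = np.array([nominal_model(x, u) for x, u in zip(X, U)])
    E = X_next - preds                              # residual errors
    Phi = np.hstack([X, U, np.ones((len(X), 1))])   # simple feature map
    W, *_ = np.linalg.lstsq(Phi, E, rcond=None)
    return W

def corrected_model(x, u, W):
    """Nominal prediction plus the learned residual correction."""
    phi = np.concatenate([x, u, [1.0]])
    return nominal_model(x, u) + phi @ W

# Toy usage: data whose true dynamics include an unmodeled drift term.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
U = rng.normal(size=(200, 3))
X_next = np.array([nominal_model(x, u)
                   for x, u in zip(X, U)]) + 0.05 * X
W = fit_residual(X, U, X_next)
x_pred = corrected_model(X[0], U[0], W)
```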
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.