Adaptive Control Strategy for Quadruped Robots in Actuator Degradation Scenarios
- URL: http://arxiv.org/abs/2312.17606v1
- Date: Fri, 29 Dec 2023 14:04:45 GMT
- Title: Adaptive Control Strategy for Quadruped Robots in Actuator Degradation Scenarios
- Authors: Xinyuan Wu, Wentao Dong, Hang Lai, Yong Yu and Ying Wen
- Abstract summary: This paper introduces a teacher-student framework rooted in reinforcement learning, named Actuator Degradation Adaptation Transformer (ADAPT).
ADAPT produces a unified control strategy, enabling the robot to sustain its locomotion and perform tasks despite sudden joint actuator faults.
Empirical evaluations on the Unitree A1 platform validate the deployability and effectiveness of ADAPT on real-world quadruped robots.
- Score: 16.148061952978246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quadruped robots have strong adaptability to extreme environments but may
also experience faults. Once these faults occur, robots must be repaired before
returning to the task, reducing their practical feasibility. One prevalent
concern among these faults is actuator degradation, stemming from factors like
device aging or unexpected operational events. Traditionally, addressing this
problem has relied heavily on intricate fault-tolerant design, which demands
deep domain expertise from developers and lacks generalizability.
Learning-based approaches offer effective ways to mitigate these limitations,
but a research gap exists in effectively deploying such methods on real-world
quadruped robots. This paper introduces a pioneering teacher-student framework
rooted in reinforcement learning, named Actuator Degradation Adaptation
Transformer (ADAPT), aimed at addressing this research gap. This framework
produces a unified control strategy, enabling the robot to sustain its
locomotion and perform tasks despite sudden joint actuator faults, relying
exclusively on its internal sensors. Empirical evaluations on the Unitree A1
platform validate the deployability and effectiveness of ADAPT on real-world
quadruped robots, and affirm the robustness and practicality of our approach.
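The teacher-student scheme described in the abstract can be illustrated with a minimal distillation loop: a teacher policy with privileged access to actuator health supervises a student that only sees a history of internal-sensor observations. Everything below is a stand-in sketch, not the authors' implementation: the linear policies, dimensions, and synthetic data are invented for illustration (ADAPT itself uses an RL-trained teacher and a Transformer student).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for this sketch.
OBS_DIM = 8    # proprioceptive observation (joint angles, velocities, ...)
PRIV_DIM = 4   # privileged info: per-actuator health in [0, 1]
ACT_DIM = 4    # joint commands
HIST = 5       # length of the observation history the student sees

# Teacher: a fixed linear policy over [obs, privileged info].
# This stands in for the RL-trained teacher, which we do not reproduce.
W_teacher = rng.normal(size=(ACT_DIM, OBS_DIM + PRIV_DIM))

def teacher_action(obs, priv):
    return W_teacher @ np.concatenate([obs, priv])

def rollout(n_steps=200):
    """Collect (student input, teacher action) pairs for distillation.

    The student never sees `priv`; it must infer the hidden degradation
    from the observation history alone.
    """
    X, Y = [], []
    priv = rng.uniform(0.2, 1.0, size=PRIV_DIM)  # random degradation episode
    hist = [np.zeros(OBS_DIM)] * HIST
    for _ in range(n_steps):
        obs = rng.normal(size=OBS_DIM) * priv.mean()  # degradation leaks into obs
        hist = hist[1:] + [obs]
        X.append(np.concatenate(hist))
        Y.append(teacher_action(obs, priv))
    return np.array(X), np.array(Y)

X, Y = rollout()

# Student: a linear map from observation history to teacher actions,
# fitted by least squares (a toy stand-in for the Transformer student).
W_student, *_ = np.linalg.lstsq(X, Y, rcond=None)
mse = float(np.mean((X @ W_student - Y) ** 2))
print(f"distillation MSE: {mse:.4f}")
```

The design point the sketch captures is the asymmetry of inputs: the teacher conditions on ground-truth actuator state available only in simulation, while the deployable student conditions only on sensor history, so the distillation step is what transfers fault adaptation to the real robot.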
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a diverse range of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Adaptable Recovery Behaviors in Robotics: A Behavior Trees and Motion Generators (BTMG) Approach for Failure Management [0.0]
We propose a novel approach that models recovery behaviors as adaptable robotic skills, leveraging the Behavior Trees and Motion Generators (BTMG) framework for policy representation.
We assess our methodology through a series of progressively challenging scenarios within a peg-in-a-hole task, demonstrating the approach's effectiveness in enhancing operational efficiency and task success rates in collaborative robotics settings.
arXiv Detail & Related papers (2024-04-09T08:56:43Z)
- Unsupervised Learning of Effective Actions in Robotics [0.9374652839580183]
Current state-of-the-art action representations in robotics lack proper effect-driven learning of the robot's actions.
We propose an unsupervised algorithm to discretize a continuous motion space and generate "action prototypes"
We evaluate our method on a simulated stair-climbing reinforcement learning task.
arXiv Detail & Related papers (2024-04-03T13:28:52Z)
- Bridging Active Exploration and Uncertainty-Aware Deployment Using Probabilistic Ensemble Neural Network Dynamics [11.946807588018595]
This paper presents a unified model-based reinforcement learning framework that bridges active exploration and uncertainty-aware deployment.
The two opposing tasks of exploration and deployment are optimized through state-of-the-art sampling-based MPC.
We conduct experiments on both autonomous vehicles and wheeled robots, showing promising results for both exploration and deployment.
arXiv Detail & Related papers (2023-05-20T17:20:12Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, adversarial training does not yield a favorable robustness-accuracy trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z)
- Reinforcement Learning with Adaptive Curriculum Dynamics Randomization for Fault-Tolerant Robot Control [4.9631159466100305]
The ACDR algorithm can adaptively train a quadruped robot in random actuator failure conditions.
The ACDR algorithm can be used to build a robot system that does not require additional modules for detecting actuator failures.
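The adaptive curriculum idea summarized above can be sketched as a simple difficulty-scheduling loop: failure severity is ramped up only when the policy copes with the current level. This is a generic illustration of curriculum randomization, not the ACDR algorithm itself; the threshold, step size, and episode model are all invented for the sketch.

```python
import random

def adaptive_curriculum(n_episodes=50, target_return=0.7, step=0.05):
    """Schedule actuator-failure severity based on recent performance.

    `severity` is the fraction of torque lost on the failed joint;
    the episode model below is a toy stand-in for actual RL training.
    """
    random.seed(0)
    severity = 0.0
    history = []
    for _ in range(n_episodes):
        # Stand-in for a training episode: harder failures lower the return.
        ret = max(0.0, 1.0 - severity - random.uniform(0.0, 0.2))
        history.append((severity, ret))
        # Curriculum rule: raise difficulty only once the policy copes,
        # otherwise back off so training does not stall.
        if ret >= target_return:
            severity = min(1.0, severity + step)
        else:
            severity = max(0.0, severity - step)
    return history

history = adaptive_curriculum()
```

Keeping the severity schedule performance-driven, rather than fixed, is what lets a single policy be trained across the full range of failure conditions without a separate fault-detection module.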
arXiv Detail & Related papers (2021-11-19T01:55:57Z)
- Adversarial Training is Not Ready for Robot Learning [55.493354071227174]
Adversarial training is an effective method to train deep learning models that are resilient to norm-bounded perturbations.
We show theoretically and experimentally that neural controllers obtained via adversarial training are subject to three types of defects.
Our results suggest that adversarial training is not yet ready for robot learning.
arXiv Detail & Related papers (2021-03-15T07:51:31Z)
- Improving Input-Output Linearizing Controllers for Bipedal Robots via Reinforcement Learning [85.13138591433635]
The main drawbacks of input-output linearizing controllers are the need for precise dynamics models and not being able to account for input constraints.
In this paper, we address both challenges for the specific case of bipedal robot control by the use of reinforcement learning techniques.
arXiv Detail & Related papers (2020-04-15T18:15:49Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by injecting physical noise patterns at selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.