Learning Stable Manoeuvres in Quadruped Robots from Expert
Demonstrations
- URL: http://arxiv.org/abs/2007.14290v1
- Date: Tue, 28 Jul 2020 15:02:04 GMT
- Title: Learning Stable Manoeuvres in Quadruped Robots from Expert
Demonstrations
- Authors: Sashank Tirumala, Sagar Gubbi, Kartik Paigwar, Aditya Sagi, Ashish
Joglekar, Shalabh Bhatnagar, Ashitava Ghosal, Bharadwaj Amrutur, Shishir
Kolathaya
- Abstract summary: The key problem is generating leg trajectories for continuously varying target linear and angular velocities.
We propose a two-pronged approach to address this problem.
We develop a neural network-based filter that takes in the target velocity and radius and transforms them into new commands.
- Score: 3.893720742556156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the research into development of quadruped robots picking up pace,
learning based techniques are being explored for developing locomotion
controllers for such robots. A key problem is to generate leg trajectories for
continuously varying target linear and angular velocities, in a stable manner.
In this paper, we propose a two-pronged approach to address this problem.
First, multiple simpler policies are trained to generate trajectories for a
discrete set of target velocities and turning radii. These policies are then
augmented using a higher level neural network for handling the transition
between the learned trajectories. Specifically, we develop a neural
network-based filter that takes in target velocity, radius and transforms them
into new commands that enable smooth transitions to the new trajectory. This
transformation is achieved by learning from expert demonstrations. An
application of this is the transformation of a novice user's input into an
expert user's input, thereby ensuring stable manoeuvres regardless of the
user's experience. Training our proposed architecture requires much less expert
demonstrations compared to standard neural network architectures. Finally, we
demonstrate experimentally these results in the in-house quadruped Stoch 2.
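A minimal sketch of such a command filter is below. This is illustrative only: the input layout, layer sizes, and `tanh` activation are assumptions, and the random weights stand in for parameters that would be learned from expert demonstrations.

```python
import numpy as np

def make_command_filter(rng, hidden=16):
    """Tiny two-layer MLP mapping (current command, user target) -> filtered command.

    Weights are random placeholders; in the paper's setting they would be
    trained so that a novice user's abrupt commands are transformed into
    the smooth transitions an expert user would issue.
    """
    W1 = rng.normal(0, 0.1, (hidden, 4))   # input: [v_cur, r_cur, v_tgt, r_tgt]
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (2, hidden))   # output: [v_cmd, r_cmd]
    b2 = np.zeros(2)

    def filter_command(current, target):
        x = np.concatenate([current, target])
        h = np.tanh(W1 @ x + b1)
        # Residual form: start from the user's target and apply a learned
        # correction, so an untrained filter still roughly passes commands through.
        return np.asarray(target) + W2 @ h + b2

    return filter_command

rng = np.random.default_rng(0)
f = make_command_filter(rng)
cmd = f(current=np.array([0.2, 1.0]), target=np.array([0.6, 0.5]))
```

The residual parameterization is a design choice assumed here: it keeps the filter close to an identity map before training, which is a common way to make such correction networks easy to fit from few demonstrations.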
Related papers
- Training Directional Locomotion for Quadrupedal Low-Cost Robotic Systems via Deep Reinforcement Learning [4.669957449088593]
We present Deep Reinforcement Learning training of directional locomotion for low-cost quadrupedal robots in the real world.
We exploit randomization of heading that the robot must follow to foster exploration of action-state transitions.
Changing the heading at episode resets to the current yaw plus a random value drawn from a normal distribution yields policies that can follow complex trajectories.
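The heading-randomization rule described above can be sketched as follows; the standard deviation `sigma` is a placeholder, not a value from the paper:

```python
import math
import random

def reset_target_heading(current_yaw, sigma=0.5, rng=random):
    """At an episode reset, set the new target heading to the current yaw
    plus Gaussian noise, to foster exploration of action-state transitions."""
    heading = current_yaw + rng.gauss(0.0, sigma)
    # Wrap into [-pi, pi] so downstream heading errors stay small.
    return math.atan2(math.sin(heading), math.cos(heading))

rng = random.Random(42)
h = reset_target_heading(1.0, sigma=0.5, rng=rng)
```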
arXiv Detail & Related papers (2025-03-14T03:53:01Z)
- Gait in Eight: Efficient On-Robot Learning for Omnidirectional Quadruped Locomotion [13.314871831095882]
On-robot Reinforcement Learning is a promising approach to train embodiment-aware policies for legged robots.
We present a framework for efficiently learning quadruped locomotion in just 8 minutes of raw real-time training.
We demonstrate the robustness of our approach in different indoor and outdoor environments.
arXiv Detail & Related papers (2025-03-11T12:32:06Z)
- Offline Adaptation of Quadruped Locomotion using Diffusion Models [59.882275766745295]
We present a diffusion-based approach to quadrupedal locomotion that simultaneously addresses the limitations of learning and interpolating between multiple skills.
We show that these capabilities are compatible with a multi-skill policy and can be applied with little modification and minimal compute overhead.
We verify the validity of our approach with hardware experiments on the ANYmal quadruped platform.
arXiv Detail & Related papers (2024-11-13T18:12:15Z)
- Multi-Objective Algorithms for Learning Open-Ended Robotic Problems [1.0124625066746598]
Quadrupedal locomotion is a complex, open-ended problem vital to expanding autonomous vehicle reach.
Traditional reinforcement learning approaches often fall short due to training instability and sample inefficiency.
We propose a novel method leveraging multi-objective evolutionary algorithms as an automatic curriculum learning mechanism.
arXiv Detail & Related papers (2024-11-11T16:26:42Z)
- Lessons from Learning to Spin "Pens" [51.9182692233916]
In this work, we push the boundaries of learning-based in-hand manipulation systems by demonstrating the capability to spin pen-like objects.
We first use reinforcement learning to train an oracle policy with privileged information and generate a high-fidelity trajectory dataset in simulation.
We then fine-tune the sensorimotor policy using these real-world trajectories to adapt it to real-world dynamics.
arXiv Detail & Related papers (2024-07-26T17:56:01Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Learn Fast, Segment Well: Fast Object Segmentation Learning on the iCub Robot [20.813028212068424]
We study different techniques that allow adapting an object segmentation model in presence of novel objects or different domains.
We propose a pipeline for fast instance segmentation learning for robotic applications where data come in stream.
We benchmark the proposed pipeline on two datasets and we deploy it on a real robot, iCub humanoid.
arXiv Detail & Related papers (2022-06-27T17:14:04Z)
- Rapid Locomotion via Reinforcement Learning [15.373208553045416]
We present an end-to-end learned controller that achieves record agility for the MIT Mini Cheetah.
This system runs and turns fast on natural terrains like grass, ice, and gravel and responds robustly to disturbances.
arXiv Detail & Related papers (2022-05-05T17:55:11Z)
- Generative Adversarial Imitation Learning for End-to-End Autonomous Driving on Urban Environments [0.8122270502556374]
Generative Adversarial Imitation Learning (GAIL) can train policies without explicitly requiring to define a reward function.
We show that both trained policies are capable of imitating the expert trajectory from start to end once training is complete.
arXiv Detail & Related papers (2021-10-16T15:04:13Z)
- Learning to Shift Attention for Motion Generation [55.61994201686024]
One challenge of motion generation using robot learning from demonstration techniques is that human demonstrations follow a distribution with multiple modes for one task query.
Previous approaches fail to capture all modes or tend to average modes of the demonstrations and thus generate invalid trajectories.
We propose a motion generation model with extrapolation ability to overcome this problem.
arXiv Detail & Related papers (2021-02-24T09:07:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.