Robust Robot Walker: Learning Agile Locomotion over Tiny Traps
- URL: http://arxiv.org/abs/2409.07409v2
- Date: Thu, 12 Sep 2024 15:35:49 GMT
- Title: Robust Robot Walker: Learning Agile Locomotion over Tiny Traps
- Authors: Shaoting Zhu, Runhan Huang, Linzhan Mou, Hang Zhao
- Abstract summary: We propose a novel approach that enables quadruped robots to pass various small obstacles, or "tiny traps"
Existing methods often rely on exteroceptive sensors, which can be unreliable for detecting such tiny traps.
We introduce a two-stage training framework incorporating a contact encoder and a classification head to learn implicit representations of different traps.
- Score: 28.920959351960413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quadruped robots must exhibit robust walking capabilities in practical applications. In this work, we propose a novel approach that enables quadruped robots to pass various small obstacles, or "tiny traps". Existing methods often rely on exteroceptive sensors, which can be unreliable for detecting such tiny traps. To overcome this limitation, our approach focuses solely on proprioceptive inputs. We introduce a two-stage training framework incorporating a contact encoder and a classification head to learn implicit representations of different traps. Additionally, we design a set of tailored reward functions to improve both the stability of training and the ease of deployment for goal-tracking tasks. To benefit further research, we design a new benchmark for the tiny trap task. Extensive experiments in both simulation and real-world settings demonstrate the effectiveness and robustness of our method. Project Page: https://robust-robot-walker.github.io/
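The abstract's two-stage framework can be pictured concretely. The sketch below is a minimal, hypothetical illustration (not the paper's implementation): a contact encoder compresses a window of proprioceptive observations into a latent vector, and a classification head predicts the trap type from that latent, supervising the encoder to form an implicit trap representation. All dimensions and weights are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 24 proprioceptive features per step,
# an 8-D latent, and 4 trap classes.
OBS_DIM, LATENT_DIM, NUM_TRAPS = 24, 8, 4

# Stage 1 (sketch): a contact encoder projecting pooled proprioception to a latent.
W_enc = rng.standard_normal((OBS_DIM, LATENT_DIM)) * 0.1

def contact_encoder(obs_history):
    """Mean-pool a (T, OBS_DIM) proprioceptive window, then project to the latent."""
    pooled = obs_history.mean(axis=0)
    return np.tanh(pooled @ W_enc)

# Stage 2 (sketch): a classification head predicting the trap type from the latent.
W_cls = rng.standard_normal((LATENT_DIM, NUM_TRAPS)) * 0.1

def classify_trap(latent):
    """Softmax over trap-class logits."""
    logits = latent @ W_cls
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

probs = classify_trap(contact_encoder(rng.standard_normal((50, OBS_DIM))))
```

In the actual method the classification loss would be backpropagated through both stages so the latent becomes an implicit trap representation usable by the locomotion policy.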
Related papers
- Transferable Latent-to-Latent Locomotion Policy for Efficient and Versatile Motion Control of Diverse Legged Robots [9.837559106057814]
The pretrain-and-finetune paradigm offers a promising approach for efficiently adapting to new robot entities and tasks.
We propose a latent training framework where a transferable latent-to-latent locomotion policy is pretrained alongside diverse task-specific observation encoders and action decoders.
We validate our approach through extensive simulations and real-world experiments, demonstrating that the pretrained latent-to-latent locomotion policy effectively generalizes to new robot entities and tasks with improved efficiency.
arXiv Detail & Related papers (2025-03-22T03:01:25Z)
- Gait in Eight: Efficient On-Robot Learning for Omnidirectional Quadruped Locomotion [13.314871831095882]
On-robot Reinforcement Learning is a promising approach to train embodiment-aware policies for legged robots.
We present a framework for efficiently learning quadruped locomotion in just 8 minutes of raw real-time training.
We demonstrate the robustness of our approach in different indoor and outdoor environments.
arXiv Detail & Related papers (2025-03-11T12:32:06Z)
- Towards Real-World Efficiency: Domain Randomization in Reinforcement Learning for Pre-Capture of Free-Floating Moving Targets by Autonomous Robots [0.0]
We introduce a deep reinforcement learning-based control approach to address the intricate challenge of the robotic pre-grasping phase under microgravity conditions.
Our methodology incorporates an off-policy reinforcement learning framework, employing the soft actor-critic technique to enable the gripper to proficiently approach a free-floating moving object.
For effective learning of the pre-grasping approach task, we developed a reward function that offers the agent clear and insightful feedback.
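The entry above does not give the reward's exact form, but a shaped reward for approaching a free-floating target might look like the following hypothetical sketch: penalize distance to the object and mismatch between gripper and object velocity, so the agent learns to approach while matching the target's motion. Function name, weights, and terms are all assumptions.

```python
import numpy as np

def pregrasp_reward(gripper_pos, target_pos, gripper_vel, target_vel,
                    w_dist=1.0, w_vel=0.1):
    """Hypothetical shaped reward for a pre-grasping approach:
    penalize position error and velocity mismatch with the target."""
    dist = np.linalg.norm(target_pos - gripper_pos)
    vel_err = np.linalg.norm(target_vel - gripper_vel)
    return -(w_dist * dist + w_vel * vel_err)

# Closer and velocity-matched states receive higher (less negative) reward.
r_near = pregrasp_reward(np.array([0.0, 0.0, 0.5]), np.array([0.0, 0.0, 1.0]),
                         np.zeros(3), np.zeros(3))
r_far = pregrasp_reward(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                        np.zeros(3), np.zeros(3))
```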
arXiv Detail & Related papers (2024-06-10T16:54:51Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
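A reward defined as a distance to a goal in an embedding space reduces, at inference time, to something like this minimal sketch (the embedding network itself, trained with a time-contrastive objective, is assumed to exist and is not shown; the function name is hypothetical):

```python
import numpy as np

def embedding_reward(frame_embedding, goal_embedding):
    """Hypothetical reward: negative Euclidean distance between the current
    frame's embedding and the goal embedding, both produced by a (not shown)
    encoder trained with a time-contrastive objective."""
    return -float(np.linalg.norm(frame_embedding - goal_embedding))

goal = np.array([1.0, 0.0])
# States whose embeddings lie nearer the goal embedding score higher.
r_close = embedding_reward(np.array([0.9, 0.0]), goal)
r_far = embedding_reward(np.array([0.0, 0.0]), goal)
```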
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
- Automatic Acquisition of a Repertoire of Diverse Grasping Trajectories through Behavior Shaping and Novelty Search [0.0]
We introduce an approach to generate diverse grasping movements in order to solve this problem.
The movements are generated in simulation, for particular object positions.
Although we show that generated movements actually work on a real Baxter robot, the aim is to use this method to create a large dataset to bootstrap deep learning methods.
arXiv Detail & Related papers (2022-05-17T09:17:31Z)
- Learning Perceptual Concepts by Bootstrapping from Human Queries [41.07749131023931]
We propose a new approach whereby the robot learns a low-dimensional variant of the concept and uses it to generate a larger data set for learning the concept in the high-dimensional space.
This lets it take advantage of semantically meaningful privileged information only accessible at training time, like object poses and bounding boxes, that allows for richer human interaction to speed up learning.
arXiv Detail & Related papers (2021-11-09T16:43:46Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform.
We formulate it as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
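The few-shot adaptation idea in this entry can be sketched generically: shared parameters learned across platforms are fine-tuned with a few gradient steps on a new platform's small dataset. The toy below (a one-parameter regression, all values made up) only illustrates that inner-loop adaptation step, not the paper's Bayesian formulation.

```python
import numpy as np

def adapt(theta, xs, ys, lr=0.1, steps=5):
    """Hypothetical few-shot adaptation: starting from shared parameters theta
    (slope of y = theta * x), take a few gradient steps on the new platform's
    small dataset using the squared-error loss."""
    for _ in range(steps):
        grad = np.mean(2 * (theta * xs - ys) * xs)
        theta = theta - lr * grad
    return theta

# A new platform whose true slope is 2.0, adapted from a shared init of 0.5
# using only three examples.
xs = np.array([1.0, 2.0, 3.0])
ys = 2.0 * xs
theta_new = adapt(theta=0.5, xs=xs, ys=ys)
```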
arXiv Detail & Related papers (2021-03-05T14:16:20Z)
- Learning to Shift Attention for Motion Generation [55.61994201686024]
One challenge of motion generation using robot learning from demonstration techniques is that human demonstrations follow a distribution with multiple modes for one task query.
Previous approaches fail to capture all modes or tend to average modes of the demonstrations and thus generate invalid trajectories.
We propose a motion generation model with extrapolation ability to overcome this problem.
arXiv Detail & Related papers (2021-02-24T09:07:52Z)
- Learning Stable Manoeuvres in Quadruped Robots from Expert Demonstrations [3.893720742556156]
The key problem is to generate leg trajectories for continuously varying target linear and angular velocities.
We propose a two-pronged approach to address this problem.
We develop a neural network-based filter that takes in the target velocity and radius and transforms them into new commands.
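The interface of such a command filter can be sketched as follows. This is a hand-written stand-in for the learned filter (the real one is a neural network): it maps a target linear velocity and turning radius to clipped, low-pass-filtered (v, omega) commands so transitions stay smooth. All names and limits are assumptions.

```python
import numpy as np

def command_filter(target_v, radius, prev_cmd, alpha=0.2, v_max=1.0, w_max=2.0):
    """Hypothetical stand-in for the learned filter: clip the linear velocity,
    derive the angular velocity from v / radius, and low-pass filter against
    the previous command to keep transitions smooth."""
    v = np.clip(target_v, -v_max, v_max)
    omega = np.clip(v / radius if radius != 0 else 0.0, -w_max, w_max)
    cmd = np.array([v, omega])
    return alpha * cmd + (1 - alpha) * np.asarray(prev_cmd)

# Starting from rest, a target of 0.5 m/s on a 1 m radius is eased in gradually.
cmd = command_filter(0.5, 1.0, [0.0, 0.0])
```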
arXiv Detail & Related papers (2020-07-28T15:02:04Z)
- Efficient reinforcement learning control for continuum robots based on Inexplicit Prior Knowledge [3.3645162441357437]
We propose an efficient reinforcement learning method based on inexplicit prior knowledge.
By using our method, we can achieve active visual tracking and distance maintenance of a tendon-driven robot.
arXiv Detail & Related papers (2020-02-26T15:47:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.