Unlock Reliable Skill Inference for Quadruped Adaptive Behavior by Skill Graph
- URL: http://arxiv.org/abs/2311.06015v2
- Date: Wed, 26 Feb 2025 18:10:09 GMT
- Title: Unlock Reliable Skill Inference for Quadruped Adaptive Behavior by Skill Graph
- Authors: Hongyin Zhang, Diyuan Shi, Zifeng Zhuang, Han Zhao, Zhenyu Wei, Feng Zhao, Sibo Gai, Shangke Lyu, Donglin Wang
- Abstract summary: We propose a novel framework, named Robot Skill Graph (RSG), for organizing a massive set of fundamental skills of robots. RSG is composed of massive dynamic behavioral skills instead of the static knowledge in a KG. We show that RSG can provide reliable skill inference upon new tasks and environments.
- Score: 26.861541495975686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing robotic intelligent systems that can adapt quickly to unseen wild situations is one of the critical challenges in pursuing autonomous robotics. Although some impressive progress has been made in walking stability and skill learning in the field of legged robots, their ability for fast adaptation is still inferior to that of animals in nature. Animals are born with a massive set of skills needed to survive and can quickly acquire new ones by composing fundamental skills with limited experience. Inspired by this, we propose a novel framework, named Robot Skill Graph (RSG), for organizing a massive set of fundamental skills of robots and dexterously reusing them for fast adaptation. Bearing a structure similar to the Knowledge Graph (KG), RSG is composed of massive dynamic behavioral skills instead of the static knowledge in a KG and enables discovering the implicit relations between the learning context and the acquired skills of robots, serving as a starting point for understanding the subtle patterns in robots' skill learning. Extensive experimental results demonstrate that RSG can provide reliable skill inference on new tasks and environments and enable quadruped robots to adapt to new scenarios and quickly learn new skills.
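The paper itself is not accompanied by code on this page, but the core idea in the abstract, a graph whose nodes are learned skills and whose edges tie each skill to the learning context it was acquired in, so that an unseen context can be matched to reusable skills, can be illustrated with a minimal sketch. Everything below (class names, the numeric context descriptor, the nearest-context lookup) is an illustrative assumption, not the RSG implementation.

```python
# Hypothetical sketch in the spirit of a skill graph: skills are nodes carrying
# the context they were learned in, and inference retrieves the skills whose
# training contexts are closest to a new, unseen context. Illustrative only.
from dataclasses import dataclass, field


@dataclass
class SkillNode:
    name: str         # e.g. "trot", "crawl", "jump"
    context: tuple    # numeric descriptor of the training context (assumed)
    policy_id: str    # handle to the learned low-level policy


@dataclass
class SkillGraph:
    nodes: list = field(default_factory=list)

    def add_skill(self, node: SkillNode) -> None:
        self.nodes.append(node)

    def infer(self, new_context: tuple, k: int = 2) -> list:
        """Return the k skills whose training contexts are nearest to the new
        context, as candidate building blocks for fast adaptation."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        ranked = sorted(self.nodes, key=lambda n: dist(n.context, new_context))
        return ranked[:k]


# Usage: query the graph with an unseen (terrain roughness, payload) context.
graph = SkillGraph()
graph.add_skill(SkillNode("trot", (0.1, 0.0), "pi_trot"))
graph.add_skill(SkillNode("crawl", (0.8, 0.2), "pi_crawl"))
print([s.name for s in graph.infer((0.7, 0.1))])
```

Retrieval by context similarity is only one plausible reading of "skill inference"; the paper's graph may encode richer relation types than this sketch shows.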
Related papers
- Neural Associative Skill Memories for safer robotics and modelling human sensorimotor repertoires [8.047222674695288]
Associative Skill Memories (ASMs) aim to link movement primitives to sensory feedback, but existing implementations rely on hard-coded libraries of individual skills. Here we introduce Neural Associative Skill Memories (Neural ASMs), a framework that utilises self-supervised predictive coding for temporal prediction. Unlike traditional ASMs, which require explicit skill selection, Neural ASMs implicitly recognize and express skills through contextual inference.
arXiv Detail & Related papers (2025-05-14T19:46:23Z)
- Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation [17.222197596599685]
We propose a Skill Learning approach that discovers composable behaviors by solving a large number of autonomously generated tasks.
Our method learns skills allowing the robot to consistently and robustly interact with objects in its environment.
The learned skills can be used to solve a set of unseen manipulation tasks, in simulation as well as on a real robotic platform.
arXiv Detail & Related papers (2024-10-07T09:19:13Z)
- Generalized Animal Imitator: Agile Locomotion with Versatile Motion Prior [14.114972332185044]
This paper introduces the Versatile Motion prior (VIM) - a Reinforcement Learning framework designed to incorporate a range of agile locomotion tasks.
Our framework enables legged robots to learn diverse agile low-level skills by imitating animal motions and manually designed motions.
Our evaluations of the VIM framework span both simulation environments and real-world deployment.
arXiv Detail & Related papers (2023-10-02T17:59:24Z)
- Lifelike Agility and Play in Quadrupedal Robots using Reinforcement Learning and Generative Pre-trained Models [28.519964304030236]
We propose a hierarchical framework to construct primitive-, environmental- and strategic-level knowledge that is all pre-trainable, reusable and enrichable for legged robots.
The primitive module summarizes knowledge from animal motion data, where, inspired by large pre-trained models in language and image understanding, we introduce deep generative models to produce motor control signals stimulating legged robots to act like real animals.
We apply the trained hierarchical controllers to the MAX robot, a quadrupedal robot developed in-house, to mimic animals, traverse complex obstacles and play in a designed challenging multi-agent chase tag game.
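As a rough illustration of the hierarchical design described in the entry above, the sketch below wires a frozen generative primitive decoder (standing in for the module trained on animal motion data) to a small higher-level policy that only emits latent commands. The module sizes, interfaces, and the use of PyTorch are assumptions for illustration, not the paper's actual architecture.

```python
# Hedged sketch of a two-level controller: a frozen generative primitive
# decoder turns latent commands plus proprioception into joint targets, while
# a higher-level policy only outputs the latent command. Illustrative only.
import torch
import torch.nn as nn

LATENT_DIM, PROPRIO_DIM, NUM_JOINTS = 16, 48, 12

# Primitive level: decoder of a generative model (kept frozen once pre-trained).
primitive_decoder = nn.Sequential(
    nn.Linear(LATENT_DIM + PROPRIO_DIM, 128), nn.ReLU(),
    nn.Linear(128, NUM_JOINTS),
)
for p in primitive_decoder.parameters():
    p.requires_grad_(False)

# Environment/strategy level: a small policy that emits latent commands.
high_level_policy = nn.Sequential(
    nn.Linear(PROPRIO_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM),
)

def act(proprio: torch.Tensor) -> torch.Tensor:
    """Map proprioceptive observations to joint targets via the latent skill space."""
    z = high_level_policy(proprio)
    return primitive_decoder(torch.cat([z, proprio], dim=-1))

print(act(torch.randn(1, PROPRIO_DIM)).shape)  # torch.Size([1, 12])
```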
arXiv Detail & Related papers (2023-08-29T09:22:12Z)
- Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Online Damage Recovery for Physical Robots with Hierarchical Quality-Diversity [3.899855581265355]
We introduce the Hierarchical Trial and Error algorithm, which uses a hierarchical behavioural repertoire to learn diverse skills.
We show that the hierarchical decomposition of skills enables the robot to learn more complex behaviours while keeping the learning of the repertoire tractable.
arXiv Detail & Related papers (2022-10-18T15:02:41Z)
- Hierarchical Quality-Diversity for Online Damage Recovery [1.376408511310322]
We introduce the Hierarchical Trial and Error algorithm, which uses a hierarchical behavioural repertoire to learn diverse skills.
We show that the hierarchical decomposition of skills enables the robot to learn more complex behaviours while keeping the learning of the repertoire tractable.
arXiv Detail & Related papers (2022-04-12T11:44:01Z)
- Example-Driven Model-Based Reinforcement Learning for Solving Long-Horizon Visuomotor Tasks [85.56153200251713]
We introduce EMBR, a model-based RL method for learning primitive skills that are suitable for completing long-horizon visuomotor tasks.
On a Franka Emika robot arm, we find that EMBR enables the robot to complete three long-horizon visuomotor tasks with an 85% success rate.
arXiv Detail & Related papers (2021-09-21T16:48:07Z)
- Adaptation of Quadruped Robot Locomotion with Meta-Learning [64.71260357476602]
We demonstrate that meta-reinforcement learning can be used to successfully train a robot capable of solving a wide range of locomotion tasks.
The performance of the meta-trained robot is similar to that of a robot that is trained on a single task.
arXiv Detail & Related papers (2021-07-08T10:37:18Z)
- Discovering Generalizable Skills via Automated Generation of Diverse Tasks [82.16392072211337]
We propose a method to discover generalizable skills via automated generation of a diverse set of tasks.
As opposed to prior work on unsupervised discovery of skills, our method pairs each skill with a unique task produced by a trainable task generator.
A task discriminator defined on the robot behaviors in the generated tasks is jointly trained to estimate the evidence lower bound of the diversity objective.
The learned skills can then be composed in a hierarchical reinforcement learning algorithm to solve unseen target tasks.
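The discriminator-based diversity objective mentioned in the entry above is closely related to the DIAYN-style variational lower bound on the mutual information between skills and behaviors; the sketch below shows only that generic bound. The network sizes, the uniform skill prior, and the omission of the trainable task generator are simplifying assumptions rather than the paper's formulation.

```python
# Hedged sketch of a discriminator-based lower bound on skill diversity:
# train a classifier q(z|s) to predict the skill from the behavior, and use
# log q(z|s) - log p(z) as an intrinsic reward. Illustrative only.
import torch
import torch.nn as nn

NUM_SKILLS, OBS_DIM = 8, 32

discriminator = nn.Sequential(
    nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_SKILLS)
)
optimizer = torch.optim.Adam(discriminator.parameters(), lr=3e-4)
log_p_z = torch.log(torch.tensor(1.0 / NUM_SKILLS))  # uniform skill prior

def diversity_step(obs: torch.Tensor, skill: torch.Tensor) -> torch.Tensor:
    """Update the discriminator and return the intrinsic reward
    log q(z|s) - log p(z), a variational lower bound on I(Z; S)."""
    logits = discriminator(obs)
    loss = nn.functional.cross_entropy(logits, skill)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        log_q_z = torch.log_softmax(discriminator(obs), dim=-1)
        reward = log_q_z.gather(1, skill.unsqueeze(1)).squeeze(1) - log_p_z
    return reward

# Example: a batch of observations labeled with the skill that generated them.
obs = torch.randn(16, OBS_DIM)
skill = torch.randint(0, NUM_SKILLS, (16,))
print(diversity_step(obs, skill).shape)  # torch.Size([16])
```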
arXiv Detail & Related papers (2021-06-26T03:41:51Z)
- SKID RAW: Skill Discovery from Raw Trajectories [23.871402375721285]
It is desirable to only demonstrate full task executions instead of all individual skills.
We propose a novel approach that simultaneously learns to segment trajectories into recurring patterns.
The approach learns a skill conditioning that can be used to understand possible sequences of skills.
arXiv Detail & Related papers (2021-03-26T17:27:13Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.