CasIL: Cognizing and Imitating Skills via a Dual Cognition-Action
Architecture
- URL: http://arxiv.org/abs/2309.16299v1
- Date: Thu, 28 Sep 2023 09:53:05 GMT
- Title: CasIL: Cognizing and Imitating Skills via a Dual Cognition-Action
Architecture
- Authors: Zixuan Chen, Ze Ji, Shuyang Liu, Jing Huo, Yiyu Chen, Yang Gao
- Abstract summary: Existing imitation learning approaches for robots still grapple with sub-optimal performance in complex tasks.
Heuristically, we extend the usual notion of action to a dual Cognition (high-level)-Action (low-level) architecture.
We propose a novel skill IL framework through human-robot interaction, called Cognition-Action-based Skill Imitation Learning (CasIL)
- Score: 20.627616015484648
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Enabling robots to effectively imitate expert skills in long-horizon tasks
such as locomotion, manipulation, and more, poses a long-standing challenge.
Existing imitation learning (IL) approaches for robots still grapple with
sub-optimal performance in complex tasks. In this paper, we consider how this
challenge can be addressed within the human cognitive priors. Heuristically, we
extend the usual notion of action to a dual Cognition (high-level)-Action
(low-level) architecture by introducing intuitive human cognitive priors, and
propose a novel skill IL framework through human-robot interaction, called
Cognition-Action-based Skill Imitation Learning (CasIL), for the robotic agent
to effectively cognize and imitate the critical skills from raw visual
demonstrations. CasIL enables both cognition and action imitation, while
high-level skill cognition explicitly guides low-level primitive actions,
providing robustness and reliability to the entire skill IL process. We
evaluated our method on MuJoCo and RLBench benchmarks, as well as on the
obstacle avoidance and point-goal navigation tasks for quadrupedal robot
locomotion. Experimental results show that our CasIL consistently achieves
competitive and robust skill imitation compared to counterpart methods across a
variety of long-horizon robotic tasks.
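The abstract describes the dual Cognition-Action architecture only at a high level. Below is a minimal, hypothetical PyTorch sketch of what a two-level cognition-action imitation policy could look like; every module name, the discrete skill space, the pre-extracted visual features, and the joint two-level behavioral-cloning loss are illustrative assumptions, not the CasIL authors' implementation.

```python
# Hypothetical sketch of a dual cognition-action imitation policy.
# Names, dimensions, and losses are assumptions for illustration only.
import torch
import torch.nn as nn


class DualCognitionActionPolicy(nn.Module):
    """Two-level policy: a high-level cognition head selects a discrete skill,
    and a low-level action head produces primitive actions conditioned on it."""

    def __init__(self, obs_dim: int, num_skills: int, action_dim: int, hidden: int = 256):
        super().__init__()
        # High-level "cognition": predicts a skill from (pre-extracted) visual features.
        self.cognition = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_skills)
        )
        self.skill_embed = nn.Embedding(num_skills, hidden)
        # Low-level "action": maps observation features + skill embedding to actions.
        self.action = nn.Sequential(
            nn.Linear(obs_dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, action_dim)
        )

    def forward(self, obs_feat: torch.Tensor):
        skill_logits = self.cognition(obs_feat)                  # (B, num_skills)
        skill_id = skill_logits.argmax(dim=-1)                   # hard skill selection
        z = self.skill_embed(skill_id)                           # (B, hidden)
        action = self.action(torch.cat([obs_feat, z], dim=-1))   # (B, action_dim)
        return skill_logits, action


def dual_imitation_loss(policy, obs_feat, expert_skill, expert_action, w_cog=1.0):
    # Imitate both levels: cross-entropy on skill labels, MSE on expert actions.
    skill_logits, action = policy(obs_feat)
    cognition_loss = nn.functional.cross_entropy(skill_logits, expert_skill)
    action_loss = nn.functional.mse_loss(action, expert_action)
    return w_cog * cognition_loss + action_loss
```

In this sketch the cognition head would be supervised with skill labels (e.g., gathered through human-robot interaction), and its output explicitly conditions the low-level action head, mirroring the idea that high-level skill cognition guides primitive actions; consult the paper itself for how CasIL actually defines and trains both levels.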
Related papers
- Towards Human-level Dexterity via Robot Learning [1.3910668204452978]
Dexterous intelligence is a pinnacle of human physical intelligence and emergent higher-order cognitive skills. Many millions of years were spent co-evolving the human brain and hands, including rich tactile sensing. This thesis explores a new paradigm of using visuo-tactile human demonstrations for dexterity, introducing corresponding imitation learning techniques.
arXiv Detail & Related papers (2025-07-12T02:22:55Z) - Truly Self-Improving Agents Require Intrinsic Metacognitive Learning [59.60803539959191]
Self-improving agents aim to continuously acquire new capabilities with minimal supervision. Current approaches face key limitations: their self-improvement processes are often rigid, fail to generalize across task domains, and struggle to scale with increasing agent capabilities. We argue that effective self-improvement requires intrinsic metacognitive learning, defined as an agent's intrinsic ability to actively evaluate, reflect on, and adapt its own learning processes.
arXiv Detail & Related papers (2025-06-05T14:53:35Z) - SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths.
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z) - Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation [17.222197596599685]
We propose a Skill Learning approach that discovers composable behaviors by solving a large number of autonomously generated tasks.
Our method learns skills allowing the robot to consistently and robustly interact with objects in its environment.
The learned skills can be used to solve a set of unseen manipulation tasks, in simulation as well as on a real robotic platform.
arXiv Detail & Related papers (2024-10-07T09:19:13Z) - Incremental procedural and sensorimotor learning in cognitive humanoid
robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - Robot Skill Adaptation via Soft Actor-Critic Gaussian Mixture Models [29.34375999491465]
A core challenge for an autonomous agent acting in the real world is to adapt its repertoire of skills to cope with its noisy perception and dynamics.
To scale learning of skills to long-horizon tasks, robots should be able to learn and later refine their skills in a structured manner.
We propose SAC-GMM, a novel hybrid approach that learns robot skills through a dynamical system and adapts the learned skills in their own trajectory distribution space.
arXiv Detail & Related papers (2021-11-25T15:36:11Z) - Bottom-Up Skill Discovery from Unsegmented Demonstrations for
Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z) - Example-Driven Model-Based Reinforcement Learning for Solving
Long-Horizon Visuomotor Tasks [85.56153200251713]
We introduce EMBR, a model-based RL method for learning primitive skills that are suitable for completing long-horizon visuomotor tasks.
On a Franka Emika robot arm, we find that EMBR enables the robot to complete three long-horizon visuomotor tasks at 85% success rate.
arXiv Detail & Related papers (2021-09-21T16:48:07Z) - Discovering Generalizable Skills via Automated Generation of Diverse
Tasks [82.16392072211337]
We propose a method to discover generalizable skills via automated generation of a diverse set of tasks.
As opposed to prior work on unsupervised discovery of skills, our method pairs each skill with a unique task produced by a trainable task generator.
A task discriminator defined on the robot behaviors in the generated tasks is jointly trained to estimate the evidence lower bound of the diversity objective.
The learned skills can then be composed in a hierarchical reinforcement learning algorithm to solve unseen target tasks.
arXiv Detail & Related papers (2021-06-26T03:41:51Z) - SKID RAW: Skill Discovery from Raw Trajectories [23.871402375721285]
It is desirable to only demonstrate full task executions instead of all individual skills.
We propose a novel approach that simultaneously learns to segment trajectories into reoccurring patterns.
The approach learns a skill conditioning that can be used to understand possible sequences of skills.
arXiv Detail & Related papers (2021-03-26T17:27:13Z) - Active Hierarchical Imitation and Reinforcement Learning [0.0]
In this project, we explored different imitation learning algorithms and designed active learning algorithms on top of the hierarchical imitation and reinforcement learning framework we developed.
Our experimental results showed that using DAgger and a reward-based active learning method achieves better performance while reducing the physical and mental effort required from humans during training.
arXiv Detail & Related papers (2020-12-14T08:27:27Z) - REAL-X -- Robot open-Ended Autonomous Learning Architectures: Achieving
Truly End-to-End Sensorimotor Autonomous Learning Systems [0.0]
We study the challenges posed by the previously proposed 'REAL competition' benchmark.
We present a set of 'REAL-X' robot architectures that are able to solve different versions of the benchmark.
arXiv Detail & Related papers (2020-11-27T18:12:06Z) - Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.