RObotic MAnipulation Network (ROMAN) – Hybrid Hierarchical Learning for Solving Complex Sequential Tasks
- URL: http://arxiv.org/abs/2307.00125v2
- Date: Fri, 7 Jul 2023 17:26:33 GMT
- Title: RObotic MAnipulation Network (ROMAN) – Hybrid Hierarchical Learning for Solving Complex Sequential Tasks
- Authors: Eleftherios Triantafyllidis, Fernando Acero, Zhaocheng Liu and Zhibin Li
- Abstract summary: We present a Hybrid Hierarchical Learning framework, the Robotic Manipulation Network (ROMAN)
ROMAN achieves task versatility and robust failure recovery by integrating behavioural cloning, imitation learning, and reinforcement learning.
Experimental results show that by orchestrating and activating these specialised manipulation experts, ROMAN generates correct sequential activations for accomplishing long sequences of sophisticated manipulation tasks.
- Score: 70.69063219750952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving long sequential tasks poses a significant challenge in embodied
artificial intelligence. Enabling a robotic system to perform diverse
sequential tasks with a broad range of manipulation skills is an active area of
research. In this work, we present a Hybrid Hierarchical Learning framework,
the Robotic Manipulation Network (ROMAN), to address the challenge of solving
multiple complex tasks over long time horizons in robotic manipulation. ROMAN
achieves task versatility and robust failure recovery by integrating
behavioural cloning, imitation learning, and reinforcement learning. It
consists of a central manipulation network that coordinates an ensemble of
various neural networks, each specialising in distinct re-combinable sub-tasks
to generate their correct in-sequence actions for solving complex long-horizon
manipulation tasks. Experimental results show that by orchestrating and
activating these specialised manipulation experts, ROMAN generates correct
sequential activations for accomplishing long sequences of sophisticated
manipulation tasks and achieving adaptive behaviours beyond demonstrations,
while exhibiting robustness to various sensory noises. These results
demonstrate the significance and versatility of ROMAN's dynamic adaptability
featuring autonomous failure recovery capabilities, and highlight its potential
for various autonomous manipulation tasks that demand adaptive motor skills.
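The abstract describes the architecture only at a high level. As a rough illustration of a central manipulation network activating an ensemble of specialised experts, a minimal sketch is given below. The class names, the linear gating head and the soft mixture of expert actions are assumptions made for illustration, not ROMAN's implementation, and the training signals (behavioural cloning, imitation learning, reinforcement learning) are omitted.

```python
# Minimal, illustrative sketch of a hierarchical gating architecture: a central
# network scores specialised sub-task experts and blends their actions.
# Names, dimensions and the linear gate are assumptions, not ROMAN's code.
import numpy as np

class ExpertPolicy:
    """Stands in for one specialised sub-task network (e.g. pick, place, push)."""
    def __init__(self, action_dim: int, seed: int):
        self.rng = np.random.default_rng(seed)
        self.action_dim = action_dim

    def act(self, obs: np.ndarray) -> np.ndarray:
        # A trained neural network would go here; we return a placeholder action.
        return self.rng.standard_normal(self.action_dim)

class CentralManipulationNetwork:
    """Activates experts for the current observation and mixes their actions."""
    def __init__(self, experts, obs_dim: int):
        self.experts = experts
        # Hypothetical linear gating head: one activation score per expert.
        self.W = np.zeros((len(experts), obs_dim))

    def activation_weights(self, obs: np.ndarray) -> np.ndarray:
        logits = self.W @ obs
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()                    # softmax over experts

    def act(self, obs: np.ndarray) -> np.ndarray:
        weights = self.activation_weights(obs)
        actions = np.stack([e.act(obs) for e in self.experts])
        return weights @ actions                  # weighted mixture of expert actions

if __name__ == "__main__":
    obs_dim, action_dim = 16, 7
    experts = [ExpertPolicy(action_dim, seed=i) for i in range(5)]
    gate = CentralManipulationNetwork(experts, obs_dim)
    print(gate.act(np.zeros(obs_dim)).shape)      # -> (7,)
```

In practice both the gate and each expert would be trained networks; the sketch only shows the inference-time routing of an observation to a blended expert action.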
Related papers
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action Spaces [9.578169216444813]
This paper proposes an approach to coordinating multi-robot manipulation through learned latent action spaces that are shared across different agents.
We validate our method in simulated multi-robot manipulation tasks and demonstrate improvement over previous baselines in terms of sample efficiency and learning performance.
arXiv Detail & Related papers (2022-11-28T23:20:47Z)
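A minimal sketch of the shared-latent-action idea summarised above: a central policy maps the joint observation to a single latent action, which per-robot decoders turn into individual commands. The dimensions, the linear maps and the tanh squashing are illustrative assumptions, not the CLAS model.

```python
# Illustrative sketch of a shared latent action space for multi-robot
# coordination; sizes and linear maps are assumptions, not the CLAS model.
import numpy as np

rng = np.random.default_rng(0)
n_robots, obs_dim, act_dim, latent_dim = 2, 12, 7, 4

# Central policy: joint observation of all robots -> one shared latent action.
W_pi = 0.1 * rng.standard_normal((latent_dim, n_robots * obs_dim))
# Per-robot decoders: shared latent action -> each robot's joint command.
decoders = [0.1 * rng.standard_normal((act_dim, latent_dim)) for _ in range(n_robots)]

def step(joint_obs: np.ndarray) -> list[np.ndarray]:
    z = np.tanh(W_pi @ joint_obs)        # shared latent action
    return [D @ z for D in decoders]     # one decoded action per robot

actions = step(rng.standard_normal(n_robots * obs_dim))
print([a.shape for a in actions])        # -> [(7,), (7,)]
```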
- Discovering Unsupervised Behaviours from Full-State Trajectories [1.827510863075184]
We propose an analysis of Autonomous Robots Realising their Abilities, a Quality-Diversity algorithm that autonomously finds behavioural characterisations.
We evaluate this approach on a simulated robotic environment, where the robot has to autonomously discover its abilities from its full-state trajectories.
More specifically, the analysed approach autonomously finds policies that make the robot move to diverse positions, but also utilise its legs in diverse ways, and even perform half-rolls.
arXiv Detail & Related papers (2022-11-22T16:57:52Z)
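The entry above builds on Quality-Diversity optimisation; a generic QD archive loop, which keeps the best solution per behavioural niche, can be sketched as follows. The toy fitness, the hand-coded two-dimensional descriptor and the grid size are stand-ins, since the paper's point is precisely that behavioural characterisations can be found autonomously.

```python
# Generic Quality-Diversity archive loop (illustrative only; the evaluated
# "policy" and its behaviour descriptor are toy stand-ins).
import numpy as np

rng = np.random.default_rng(0)
GRID = 10                                   # cells per descriptor dimension

def evaluate(params: np.ndarray):
    """Toy rollout: return (fitness, 2-D behaviour descriptor in [0, 1]^2)."""
    fitness = -float(np.sum(params ** 2))
    descriptor = (np.tanh(params[:2]) + 1.0) / 2.0
    return fitness, descriptor

archive = {}                                # grid cell -> (fitness, params)
for _ in range(2000):
    if archive:                             # mutate a random elite
        keys = list(archive)
        parent = archive[keys[rng.integers(len(keys))]][1]
        child = parent + 0.1 * rng.standard_normal(parent.shape)
    else:                                   # bootstrap with a random solution
        child = rng.standard_normal(8)
    fit, desc = evaluate(child)
    cell = tuple(np.clip((desc * GRID).astype(int), 0, GRID - 1))
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, child)        # keep the best solution per cell

print(f"behavioural cells filled: {len(archive)} / {GRID * GRID}")
```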
- Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
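One way to picture the sequential bias described above is to cut the single demonstration into an ordered chain of sub-goals and reward the agent for reaching them in sequence. The sketch below shows only that goal-extraction and success-checking step; the spacing and tolerance are arbitrary, and this is not the DCIL-II algorithm itself.

```python
# Illustrative sub-goal chaining from a single demonstration: extract evenly
# spaced states as goals and reward reaching them in order. The spacing and
# tolerance are arbitrary choices, not DCIL-II's actual mechanism.
import numpy as np

def extract_subgoals(demo_states: np.ndarray, every: int = 10) -> np.ndarray:
    """Take every `every`-th state of the demonstration as an ordered sub-goal."""
    return demo_states[every::every]

def sequential_reward(state, subgoals, goal_idx, tol=0.05):
    """Sparse reward: +1 when the current sub-goal is reached, then advance."""
    if goal_idx < len(subgoals) and np.linalg.norm(state - subgoals[goal_idx]) < tol:
        return 1.0, goal_idx + 1
    return 0.0, goal_idx

# Toy demonstration: a straight-line trajectory in a 3-D state space.
demo = np.linspace(np.zeros(3), np.ones(3), 100)
goals = extract_subgoals(demo)
r, idx = sequential_reward(demo[10], goals, goal_idx=0)
print(len(goals), r, idx)   # -> 9 1.0 1
```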
- Robots Learn Increasingly Complex Tasks with Intrinsic Motivation and Automatic Curriculum Learning [44.62475518267084]
Multi-task learning by robots poses the challenge of domain knowledge: the complexity of tasks, the complexity of the actions required, and the relationships between tasks for transfer learning.
We demonstrate that this domain knowledge can be learned to address the challenges in life-long learning.
We propose a framework for robots to learn sequences of actions of unbounded complexity in order to achieve multiple control tasks of various complexity.
arXiv Detail & Related papers (2022-02-11T08:14:16Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
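A rough sketch of the bottom-up idea summarised above: split unsegmented demonstrations at pauses (points of low state change) and group similar segments into a small skill library. Both the pause heuristic and the greedy grouping are stand-ins for the paper's actual segmentation and clustering.

```python
# Illustrative bottom-up skill discovery: segment demonstrations at pauses and
# greedily group similar segments into a "skill library". The pause heuristic
# and the distance threshold are assumptions, not the paper's method.
import numpy as np

def segment_at_pauses(traj: np.ndarray, vel_eps: float = 1e-3) -> list[np.ndarray]:
    """Split a (T, D) trajectory wherever the state barely changes."""
    speeds = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    cuts = np.where(speeds < vel_eps)[0] + 1
    return [s for s in np.split(traj, cuts) if len(s) > 1]

def build_skill_library(segments, dist_thresh: float = 0.5):
    """Greedy grouping: a segment joins the library only if no close entry exists."""
    library = []                               # representative mean states
    for seg in segments:
        feat = seg.mean(axis=0)
        if not any(np.linalg.norm(feat - rep) < dist_thresh for rep in library):
            library.append(feat)
    return library

# Toy demo: move, pause, then move somewhere else.
demo = np.concatenate([np.linspace([0, 0], [1, 0], 30),
                       np.repeat([[1, 0]], 5, axis=0),
                       np.linspace([1, 0], [1, 1], 30)])
skills = build_skill_library(segment_at_pauses(demo))
print(f"discovered {len(skills)} candidate skills")   # -> 2
```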
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a practical sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Multi-Task Learning with Sequence-Conditioned Transporter Networks [67.57293592529517]
We aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling.
First, we propose a new benchmark suite aimed at compositional tasks, MultiRavens, which allows defining custom task combinations.
Second, we propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
arXiv Detail & Related papers (2021-09-15T21:19:11Z)
- Intrinsically Motivated Open-Ended Multi-Task Learning Using Transfer Learning to Discover Task Hierarchy [0.0]
In open-ended continuous environments, robots need to learn multiple parameterised control tasks in hierarchical reinforcement learning.
We show that the most complex tasks can be learned more easily by transferring knowledge from simpler tasks, and faster by adapting the complexity of the actions to the task.
We propose a task-oriented representation of complex actions, called procedures, to learn online task relationships and unbounded sequences of action primitives to control the different observables of the environment.
arXiv Detail & Related papers (2021-02-19T10:44:08Z)
- An Empowerment-based Solution to Robotic Manipulation Tasks with Sparse Rewards [14.937474939057596]
It is important for robotic manipulators to learn to accomplish tasks even if they are only provided with very sparse instruction signals.
This paper proposes an intrinsic motivation approach that can be easily integrated into any standard reinforcement learning algorithm.
arXiv Detail & Related papers (2020-10-15T19:06:21Z)
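The last entry describes an intrinsic-motivation signal that can be added on top of any standard reinforcement learning algorithm when the task reward is sparse. The sketch below shows the general shaping pattern, with a count-based novelty bonus as a stand-in for the paper's empowerment estimate.

```python
# General pattern for mixing a sparse task reward with an intrinsic bonus.
# The count-based novelty bonus is a stand-in for the paper's empowerment term.
from collections import defaultdict
import numpy as np

visit_counts = defaultdict(int)

def shaped_reward(state: np.ndarray, sparse_reward: float, beta: float = 0.1) -> float:
    """Return the sparse task reward plus a decaying exploration bonus."""
    key = tuple(np.round(state, 1))          # coarse discretisation of the state
    visit_counts[key] += 1
    intrinsic = 1.0 / np.sqrt(visit_counts[key])
    return sparse_reward + beta * intrinsic

print(shaped_reward(np.zeros(3), 0.0))       # first visit: 0.0 + 0.1 * 1.0 = 0.1
print(shaped_reward(np.zeros(3), 0.0))       # bonus decays on revisits
```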