ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only
Onboard Sensors
- URL: http://arxiv.org/abs/2107.13545v1
- Date: Wed, 28 Jul 2021 17:59:41 GMT
- Title: ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only
Onboard Sensors
- Authors: Charles Sun, Jędrzej Orbik, Coline Devin, Brian Yang, Abhishek
Gupta, Glen Berseth, Sergey Levine
- Abstract summary: We study how robots can autonomously learn skills that require a combination of navigation and grasping.
Our system, ReLMM, can learn continuously on a real-world platform without any environment instrumentation.
After a grasp curriculum training phase, ReLMM can learn navigation and grasping together fully automatically, in around 40 hours of real-world training.
- Score: 64.2809875343854
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we study how robots can autonomously learn skills that require
a combination of navigation and grasping. Learning robotic skills in the real
world remains challenging without large-scale data collection and supervision.
Our aim is to devise a robotic reinforcement learning system for learning
navigation and manipulation together, in an autonomous way without
human intervention, enabling continual learning under realistic assumptions.
Specifically, our system, ReLMM, can learn continuously on a real-world
platform without any environment instrumentation, without human intervention,
and without access to privileged information, such as maps, object positions,
or a global view of the environment. Our method employs a modularized policy
with components for manipulation and navigation, where uncertainty over the
manipulation success drives exploration for the navigation controller, and the
manipulation module provides rewards for navigation. We evaluate our method on
a room cleanup task, where the robot must navigate to and pick up items
scattered on the floor. After a grasp curriculum training phase, ReLMM can
learn navigation and grasping together fully automatically, in around 40 hours
of real-world training.
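The interaction between the two modules can be made concrete with a short sketch. The snippet below is a minimal, hypothetical illustration of the loop described in the abstract: all class and function names are invented for exposition, and random stubs stand in for the learned networks; it is not the authors' implementation.

```python
# Hypothetical sketch of a ReLMM-style modular training loop. Names are
# illustrative only: grasp-success uncertainty (ensemble disagreement) acts as
# an exploration bonus for navigation, and the grasp outcome (0/1) is the
# navigation reward.
import numpy as np

rng = np.random.default_rng(0)


class GraspModule:
    """Grasp policy plus an ensemble predicting grasp success from an image."""

    def __init__(self, n_ensemble=5):
        # Stand-ins for small CNNs: each "member" is a random linear scorer.
        self.members = [rng.normal(size=64) for _ in range(n_ensemble)]

    def success_uncertainty(self, image_features):
        # Ensemble disagreement approximates epistemic uncertainty.
        probs = [1.0 / (1.0 + np.exp(-w @ image_features)) for w in self.members]
        return float(np.std(probs))

    def attempt_grasp(self):
        # Placeholder for executing the learned grasp policy on the robot.
        return float(rng.random() < 0.3)  # 1.0 if the grasp succeeded


class NavigationModule:
    def act(self, image_features, exploration_bonus):
        # Pick a discrete base motion; the bonus would normally bias the policy
        # toward regions where grasp success is uncertain.
        return int(rng.integers(0, 4))

    def update(self, image_features, action, reward):
        # Placeholder for an off-policy RL update on (obs, action, reward).
        pass


def training_step(nav, grasp):
    image_features = rng.normal(size=64)               # stand-in for onboard RGB features
    bonus = grasp.success_uncertainty(image_features)  # uncertainty-driven exploration
    action = nav.act(image_features, bonus)
    reward = grasp.attempt_grasp()                     # grasp outcome rewards navigation
    nav.update(image_features, action, reward)
    return reward


if __name__ == "__main__":
    nav, grasp = NavigationModule(), GraspModule()
    successes = sum(training_step(nav, grasp) for _ in range(100))
    print(f"grasp successes in 100 steps: {successes}")
```

Because the reward comes from the grasp outcome itself, presumably detected onboard, no environment instrumentation or external reward signal is needed, which is what the abstract's autonomy claim rests on.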
Related papers
- Autonomous Robotic Reinforcement Learning with Asynchronous Human
Feedback [27.223725464754853]
GEAR enables robots to be placed in real-world environments and left to train autonomously without interruption.
The system streams robot experience to a web interface, requiring only occasional asynchronous feedback from remote, crowdsourced, non-expert humans.
arXiv Detail & Related papers (2023-10-31T16:43:56Z)
- A Study on Learning Social Robot Navigation with Multimodal Perception [6.052803245103173]
We present a study on learning social robot navigation with multimodal perception using a large-scale real-world dataset.
We compare unimodal and multimodal learning approaches against a set of classical navigation approaches in different social scenarios.
The results show that multimodal learning has a clear advantage over unimodal learning in both dataset and human studies.
arXiv Detail & Related papers (2023-09-22T01:47:47Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Gesture2Path: Imitation Learning for Gesture-aware Navigation [54.570943577423094]
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase the effectiveness of our approach for the four gesture-navigation scenarios.
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
- Human-Aware Robot Navigation via Reinforcement Learning with Hindsight Experience Replay and Curriculum Learning [28.045441768064215]
Reinforcement learning approaches have shown superior ability in solving sequential decision making problems.
In this work, we consider the task of training an RL agent without employing demonstration data.
We propose to incorporate the hindsight experience replay (HER) and curriculum learning (CL) techniques with RL to efficiently learn the optimal navigation policy in the dense crowd.
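For context, hindsight experience replay relabels transitions from failed episodes with goals that were actually achieved, so a sparse-reward agent still receives learning signal. Below is a generic, self-contained sketch of the "future" relabeling strategy; it is a textbook illustration with invented names, not code from this paper.

```python
# Generic sketch of hindsight experience replay (HER) relabeling for
# goal-conditioned RL; illustrative only.
import numpy as np


def her_relabel(episode, reward_fn, k=4, rng=np.random.default_rng(0)):
    """Return original transitions plus copies relabeled with achieved goals.

    episode: list of dicts with keys obs, action, next_obs, achieved_goal, goal.
    reward_fn(achieved_goal, goal) -> float, e.g. 0.0 on success, -1.0 otherwise.
    """
    transitions = []
    for t, step in enumerate(episode):
        # Keep the original transition with its original (often unreached) goal.
        transitions.append({**step, "reward": reward_fn(step["achieved_goal"], step["goal"])})
        # Relabel with k goals sampled from states achieved later in the episode
        # ("future" strategy): in hindsight, the agent "intended" to reach them.
        future_idx = rng.integers(t, len(episode), size=k)
        for idx in future_idx:
            new_goal = episode[idx]["achieved_goal"]
            transitions.append({**step, "goal": new_goal,
                                "reward": reward_fn(step["achieved_goal"], new_goal)})
    return transitions


# Example: 2-D point navigation with a sparse reward.
reward = lambda ag, g: 0.0 if np.linalg.norm(np.asarray(ag) - np.asarray(g)) < 0.1 else -1.0
episode = [{"obs": [0, 0], "action": [1, 0], "next_obs": [1, 0],
            "achieved_goal": [1, 0], "goal": [5, 5]}]
print(len(her_relabel(episode, reward)))  # 1 original + 4 relabeled transitions
```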
arXiv Detail & Related papers (2021-10-09T13:18:11Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
- LaND: Learning to Navigate from Disengagements [158.6392333480079]
We present a reinforcement learning approach for learning to navigate from disengagements, or LaND.
LaND learns a neural network model that predicts which actions lead to disengagements given the current sensory observation, and then at test time plans and executes actions that avoid disengagements.
Our results demonstrate LaND can successfully learn to navigate in diverse, real world sidewalk environments, outperforming both imitation learning and reinforcement learning approaches.
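The test-time planning idea described above can be sketched roughly as follows; the predictor is a random stub and all names are hypothetical stand-ins, not the LaND codebase.

```python
# Hypothetical sketch of disengagement-avoidance planning: a learned model
# scores candidate action sequences by predicted disengagement probability,
# and the robot executes the lowest-risk sequence.
import numpy as np

rng = np.random.default_rng(0)


def predict_disengagement(observation, action_sequence):
    # Stand-in for a learned network estimating p(disengage | obs, actions).
    return float(rng.random())


def plan(observation, horizon=5, n_candidates=64):
    # Sample candidate (steer, velocity) sequences and keep the safest one.
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, 2))
    risks = [predict_disengagement(observation, seq) for seq in candidates]
    return candidates[int(np.argmin(risks))]


observation = rng.normal(size=64)   # stand-in for onboard camera features
best_actions = plan(observation)
print(best_actions[0])              # first command to execute
```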
arXiv Detail & Related papers (2020-10-09T17:21:42Z)
- Embodied Visual Navigation with Automatic Curriculum Learning in Real Environments [20.017277077448924]
NavACL is a method of automatic curriculum learning tailored to the navigation task.
Deep reinforcement learning agents trained using NavACL significantly outperform state-of-the-art agents trained with uniform sampling.
Our agents can navigate through unknown cluttered indoor environments to semantically-specified targets using only RGB images.
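One common way to realize such an automatic curriculum is to sample training tasks of intermediate estimated difficulty. The sketch below illustrates that generic idea with invented task properties and a toy success model; it is not NavACL's actual criterion.

```python
# Rough, generic sketch of automatic curriculum learning for navigation:
# prefer training goals whose estimated success probability is neither too
# low nor too high. Illustrative names and toy model only.
import numpy as np

rng = np.random.default_rng(0)


def estimate_success(task):
    # Stand-in for a learned predictor of the agent's current success
    # probability given task properties (e.g., distance to the goal).
    return float(np.exp(-task["distance"] / 5.0))


def sample_curriculum_task(n_proposals=100, low=0.2, high=0.8):
    # Propose random tasks and keep one of intermediate difficulty; fall back
    # to a uniformly sampled proposal if none lands in the target band.
    proposals = [{"distance": float(rng.uniform(1.0, 20.0))} for _ in range(n_proposals)]
    frontier = [t for t in proposals if low <= estimate_success(t) <= high]
    pool = frontier if frontier else proposals
    return pool[int(rng.integers(len(pool)))]


print(sample_curriculum_task())
```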
arXiv Detail & Related papers (2020-09-11T13:28:26Z)
- Robot Perception enables Complex Navigation Behavior via Self-Supervised Learning [23.54696982881734]
We propose an approach to unify successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL).
Our method temporally incorporates compact motion and visual perception data, directly obtained using self-supervision from a single image sequence.
We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework.
arXiv Detail & Related papers (2020-06-16T07:45:47Z)
- The Ingredients of Real-World Robotic Reinforcement Learning [71.92831985295163]
We discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.
We propose a particular instantiation of such a system, using dexterous manipulation as our case study.
We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand.
arXiv Detail & Related papers (2020-04-27T03:36:10Z)