LaND: Learning to Navigate from Disengagements
- URL: http://arxiv.org/abs/2010.04689v1
- Date: Fri, 9 Oct 2020 17:21:42 GMT
- Title: LaND: Learning to Navigate from Disengagements
- Authors: Gregory Kahn, Pieter Abbeel, Sergey Levine
- Abstract summary: We present a reinforcement learning approach for learning to navigate from disengagements, or LaND.
LaND learns a neural network model that predicts which actions lead to disengagements given the current sensory observation, and then at test time plans and executes actions that avoid disengagements.
Our results demonstrate LaND can successfully learn to navigate in diverse, real world sidewalk environments, outperforming both imitation learning and reinforcement learning approaches.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consistently testing autonomous mobile robots in real world scenarios is a
necessary aspect of developing autonomous navigation systems. Each time the
human safety monitor disengages the robot's autonomy system due to the robot
performing an undesirable maneuver, the autonomy developers gain insight into
how to improve the autonomy system. However, we believe that these
disengagements not only show where the system fails, which is useful for
troubleshooting, but also provide a direct learning signal by which the robot
can learn to navigate. We present a reinforcement learning approach for
learning to navigate from disengagements, or LaND. LaND learns a neural network
model that predicts which actions lead to disengagements given the current
sensory observation, and then at test time plans and executes actions that
avoid disengagements. Our results demonstrate LaND can successfully learn to
navigate in diverse, real world sidewalk environments, outperforming both
imitation learning and reinforcement learning approaches. Videos, code, and
other material are available on our website
https://sites.google.com/view/sidewalk-learning
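The mechanism the abstract describes has two parts: a learned, action-conditioned model that scores how likely a candidate action sequence is to trigger a disengagement, and a test-time planner that picks low-scoring sequences. The sketch below illustrates that loop with a random-shooting planner; the sigmoid scorer, feature sizes, horizon, and sampling scheme are placeholder assumptions, not the authors' implementation (their actual code is on the website above).

    import numpy as np

    rng = np.random.default_rng(0)

    def predict_disengagement(obs, actions, weights):
        """Stand-in for the learned model: maps an observation and a
        candidate action sequence to per-step disengagement probabilities.
        Here it is an arbitrary linear scorer purely for illustration."""
        feats = np.concatenate(
            [np.broadcast_to(obs, (len(actions), obs.size)), actions], axis=1)
        return 1.0 / (1.0 + np.exp(-(feats @ weights)))  # sigmoid

    def plan(obs, weights, horizon=10, n_candidates=256, action_dim=1):
        """Random-shooting planner: sample candidate steering sequences,
        score each by its summed predicted disengagement probability, and
        return the first action of the best sequence (receding horizon)."""
        best_cost, best_seq = np.inf, None
        for _ in range(n_candidates):
            seq = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
            cost = predict_disengagement(obs, seq, weights).sum()
            if cost < best_cost:
                best_cost, best_seq = cost, seq
        return best_seq[0]

    obs = rng.normal(size=8)          # placeholder for an image embedding
    weights = rng.normal(size=8 + 1)  # placeholder model parameters
    print("next action:", plan(obs, weights))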
Related papers
- SELFI: Autonomous Self-Improvement with Reinforcement Learning for Social Navigation
Self-improving robots that interact and improve with experience are key to the real-world deployment of robotic systems.
We propose an online learning method, SELFI, that leverages online robot experience to rapidly fine-tune pre-trained control policies.
We report improvements in terms of collision avoidance, as well as more socially compliant behavior, measured by a human user study.
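As a rough illustration of the online fine-tuning loop this summary describes, act with a pre-trained policy, collect experience, apply a small update, here is a minimal sketch. The policy, objective, and finite-difference update are stand-ins; SELFI's actual RL objective and architecture are described in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=4)  # pre-trained policy parameters (assumed given)

    def policy(obs, w):
        return np.tanh(obs @ w)  # placeholder control policy

    for step in range(200):      # online phase: act, observe, fine-tune
        obs = rng.normal(size=4)
        reward = -abs(policy(obs, w) - 0.2)  # stand-in online objective
        # Crude finite-difference update in place of SELFI's RL update,
        # just to show the act-then-update structure of online fine-tuning.
        eps = 1e-2 * rng.normal(size=4)
        reward_perturbed = -abs(policy(obs, w + eps) - 0.2)
        w += 0.1 * (reward_perturbed - reward) * eps / (eps @ eps)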
arXiv Detail & Related papers (2024-03-01T21:27:03Z)
- Autonomous Robotic Reinforcement Learning with Asynchronous Human Feedback
GEAR enables robots to be placed in real-world environments and left to train autonomously without interruption.
The system streams robot experience to a web interface, requiring only occasional asynchronous feedback from remote, crowdsourced, non-expert humans.
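The asynchronous part of this design can be pictured as a non-blocking label queue: the robot keeps collecting experience, and whatever human feedback has arrived is drained on each iteration. A minimal sketch under that assumption (GEAR's web interface and learning update are of course richer):

    import queue
    import random
    import threading
    import time

    feedback_q = queue.Queue()  # labels arrive whenever a human responds

    def crowd_worker():
        """Stand-in for remote, non-expert annotators: sporadically
        labels a streamed episode as good (1) or bad (0)."""
        for episode in range(5):
            time.sleep(random.uniform(0.1, 0.3))  # asynchronous, occasional
            feedback_q.put((episode, random.choice([0, 1])))

    threading.Thread(target=crowd_worker, daemon=True).start()

    replay = []
    for step in range(50):
        replay.append(("obs", step))  # training never blocks on humans
        while True:                   # drain any feedback that has arrived
            try:
                episode, label = feedback_q.get_nowait()
            except queue.Empty:
                break
            print(f"label {label} for episode {episode}: relabel replay, update policy")
        time.sleep(0.02)              # placeholder for a gradient step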
arXiv Detail & Related papers (2023-10-31T16:43:56Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices by learning to both do and undo the task, while inferring the reward function from demonstrations.
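One way to picture the do/undo structure and demonstration-derived reward is the alternating loop below. The distance-based reward is only a stand-in for the classifier MEDAL++ trains on demonstration states, and the "policies" are trivial; this sketches the episode structure, not the method itself.

    import numpy as np

    rng = np.random.default_rng(0)
    demo_states = rng.normal(size=(32, 2))  # states taken from demonstrations

    def demo_reward(state):
        """Stand-in reward inferred from demonstrations: MEDAL++ trains a
        classifier; here we just score proximity to demonstrated states."""
        return np.exp(-np.linalg.norm(demo_states - state, axis=1).min())

    state = np.zeros(2)
    for episode in range(6):
        doing = episode % 2 == 0  # alternate forward ("do") and reset ("undo")
        target = demo_states[rng.integers(len(demo_states))] if doing else np.zeros(2)
        for _ in range(20):       # placeholder policy: noisy steps toward target
            state += 0.1 * (target - state) + 0.01 * rng.normal(size=2)
        ret = demo_reward(state) if doing else -np.linalg.norm(state)
        print(f"episode {episode} ({'do' if doing else 'undo'}): return {ret:.3f}")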
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Gesture2Path: Imitation Learning for Gesture-aware Navigation
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase its effectiveness in four gesture-navigation scenarios.
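The combination the summary names, a learned cost inside model-predictive control, can be sketched as scoring sampled rollouts with that cost. Everything below is assumed for illustration: the kinematic model is a trivial integrator, and the gesture cost is a distance penalty standing in for the image-based imitation-learned cost.

    import numpy as np

    rng = np.random.default_rng(0)

    def gesture_cost(traj, gesture_goal):
        """Stand-in for the imitation-learned cost; here it simply
        penalizes distance to a point implied by the pedestrian's gesture."""
        return np.linalg.norm(traj - gesture_goal, axis=1).sum()

    def mpc(pos, gesture_goal, horizon=8, n_candidates=128):
        """Model-predictive control: roll candidate velocity sequences
        through a trivial integrator model, keep the cheapest first action."""
        best_cost, best_cmd = np.inf, None
        for _ in range(n_candidates):
            vels = rng.uniform(-0.5, 0.5, size=(horizon, 2))
            traj = pos + np.cumsum(vels, axis=0)  # simple kinematic rollout
            cost = gesture_cost(traj, gesture_goal)
            if cost < best_cost:
                best_cost, best_cmd = cost, vels[0]
        return best_cmd

    print("velocity command:", mpc(np.zeros(2), np.array([1.0, -0.5])))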
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
- Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph
We study the problem of safe and intention-aware robot navigation in dense and interactive crowds.
We propose a novel recurrent graph neural network with attention mechanisms to capture heterogeneous interactions among agents.
We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios.
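The attention step such a model uses to weight neighbor interactions can be shown in a few lines of numpy. The shapes, random projections, and single-query setup are illustrative assumptions; the paper's recurrent graph network is substantially richer.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 16
    robot = rng.normal(size=(1, d))   # robot state embedding (query)
    humans = rng.normal(size=(5, d))  # one embedding per nearby human

    Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))

    # Scaled dot-product attention: the robot attends over humans so that
    # more relevant neighbors contribute more to its interaction feature.
    q = robot @ Wq
    k, v = humans @ Wk, humans @ Wv
    scores = (q @ k.T) / np.sqrt(d)        # shape (1, 5)
    weights = np.exp(scores) / np.exp(scores).sum()
    interaction_feature = weights @ v      # aggregated input to the policy
    print("attention weights over humans:", np.round(weights, 3))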
arXiv Detail & Related papers (2022-03-03T16:26:36Z)
- Brain-Inspired Deep Imitation Learning for Autonomous Driving Systems
Humans have a strong generalisation ability that benefits from the structural and functional asymmetry of the two sides of the brain.
Here, we design dual Neural Circuit Policy (NCP) architectures in deep neural networks based on the asymmetry of human neural networks.
Experimental results demonstrate that our brain-inspired method outperforms existing methods regarding generalisation when dealing with unseen data.
arXiv Detail & Related papers (2021-07-30T14:21:46Z)
- ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only Onboard Sensors
We study how robots can autonomously learn skills that require a combination of navigation and grasping.
Our system, ReLMM, can learn continuously on a real-world platform without any environment instrumentation.
After a grasp curriculum training phase, ReLMM can learn navigation and grasping together fully automatically, in around 40 hours of real-world training.
arXiv Detail & Related papers (2021-07-28T17:59:41Z)
- Robot Perception enables Complex Navigation Behavior via Self-Supervised Learning
We propose an approach to unify successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL).
Our method temporally incorporates compact motion and visual perception data, directly obtained using self-supervision from a single image sequence.
We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework.
arXiv Detail & Related papers (2020-06-16T07:45:47Z)
- The Ingredients of Real-World Robotic Reinforcement Learning
We discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.
We propose a particular instantiation of such a system, using dexterous manipulation as our case study.
We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand.
arXiv Detail & Related papers (2020-04-27T03:36:10Z)