BADGR: An Autonomous Self-Supervised Learning-Based Navigation System
- URL: http://arxiv.org/abs/2002.05700v2
- Date: Wed, 15 Apr 2020 18:31:03 GMT
- Title: BADGR: An Autonomous Self-Supervised Learning-Based Navigation System
- Authors: Gregory Kahn, Pieter Abbeel, Sergey Levine
- Abstract summary: BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate in real-world urban and off-road environments with geometrically distracting obstacles.
- Score: 158.6392333480079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile robot navigation is typically regarded as a geometric problem, in
which the robot's objective is to perceive the geometry of the environment in
order to plan collision-free paths towards a desired goal. However, a purely
geometric view of the world can be insufficient for many navigation
problems. For example, a robot navigating based on geometry may avoid a field
of tall grass because it believes it is untraversable, and will therefore fail
to reach its desired goal. In this work, we investigate how to move beyond
these purely geometric-based approaches using a method that learns about
physical navigational affordances from experience. Our approach, which we call
BADGR, is an end-to-end learning-based mobile robot navigation system that can
be trained with self-supervised off-policy data gathered in real-world
environments, without any simulation or human supervision. BADGR can navigate
in real-world urban and off-road environments with geometrically distracting
obstacles. It can also incorporate terrain preferences, generalize to novel
environments, and continue to improve autonomously by gathering more data.
Videos, code, and other supplemental material are available on our website
https://sites.google.com/view/badgr
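The abstract describes a planner built on a learned, action-conditioned event predictor. The sketch below is a hypothetical illustration of that idea, not the paper's implementation: `predict_events` is a hand-coded stand-in for BADGR's learned neural network, and the planner uses simple random shooting to pick the action sequence with the lowest predicted event cost.

```python
# Sketch of BADGR-style planning: a model predicts per-timestep event
# probabilities (e.g., collision) for a candidate action sequence, and
# the planner selects the sequence with the lowest predicted cost.
import numpy as np

rng = np.random.default_rng(0)

def predict_events(observation, actions):
    """Hypothetical stand-in for the learned predictive model.

    Returns a predicted collision probability per action step. Here,
    headings that drift far from the goal heading count as 'risky'.
    """
    goal_heading = observation["goal_heading"]
    headings = np.cumsum(actions[:, 1])  # integrate angular velocity
    return np.clip(np.abs(headings - goal_heading), 0.0, 1.0)

def plan(observation, horizon=8, n_samples=128):
    """Random-shooting planner: sample action sequences, score each by
    predicted event cost minus a small progress bonus, keep the best."""
    best_cost, best_actions = np.inf, None
    for _ in range(n_samples):
        # Each action is (linear velocity, angular velocity).
        actions = rng.uniform([0.0, -0.5], [1.0, 0.5], size=(horizon, 2))
        cost = predict_events(observation, actions).sum()
        cost -= 0.1 * actions[:, 0].sum()  # reward forward progress
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions

obs = {"goal_heading": 0.2}
actions = plan(obs)
print(actions.shape)  # (8, 2)
```

In the paper, the predictor is trained on self-supervised labels (collisions, bumpiness) gathered autonomously; the random-shooting loop above stands in for whichever sampling-based optimizer is used at planning time.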
Related papers
- Learning Robotic Navigation from Experience: Principles, Methods, and Recent Results [94.60414567852536]
Real-world navigation presents a complex set of physical challenges that defies simple geometric abstractions.
Machine learning offers a promising way to go beyond geometry and conventional planning.
We present a toolkit for experiential learning of robotic navigation skills that unifies several recent approaches.
arXiv Detail & Related papers (2022-12-13T17:41:58Z)
- Learning Forward Dynamics Model and Informed Trajectory Sampler for Safe Quadruped Navigation [1.2783783498844021]
A typical SOTA system is composed of four main modules -- mapper, global planner, local planner, and command-tracking controller.
We build a robust and safe local planner which is designed to generate a velocity plan to track a coarsely planned path from the global planner.
Using our framework, a quadruped robot can autonomously navigate in various complex environments without a collision and generate a smoother command plan compared to the baseline method.
arXiv Detail & Related papers (2022-04-19T04:01:44Z)
- NavDreams: Towards Camera-Only RL Navigation Among Humans [35.57943738219839]
We investigate whether the world model concept, which has shown results for modeling and learning policies in Atari games, can also be applied to the camera-based navigation problem.
We create simulated environments where a robot must navigate past static and moving humans without colliding in order to reach its goal.
We find that state-of-the-art methods can solve the navigation problem and generate dream-like predictions of future image sequences.
arXiv Detail & Related papers (2022-03-23T09:46:44Z)
- ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose an approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
arXiv Detail & Related papers (2022-02-23T02:14:23Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to exactly achieve the design without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
- Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
arXiv Detail & Related papers (2020-03-20T16:13:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.