ViNG: Learning Open-World Navigation with Visual Goals
- URL: http://arxiv.org/abs/2012.09812v2
- Date: Fri, 26 Mar 2021 11:12:28 GMT
- Title: ViNG: Learning Open-World Navigation with Visual Goals
- Authors: Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart,
Sergey Levine
- Abstract summary: We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
- Score: 82.84193221280216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a learning-based navigation system for reaching visually indicated
goals and demonstrate this system on a real mobile robot platform. Learning
provides an appealing alternative to conventional methods for robotic
navigation: instead of reasoning about environments in terms of geometry and
maps, learning can enable a robot to learn about navigational affordances,
understand what types of obstacles are traversable (e.g., tall grass) or not
(e.g., walls), and generalize over patterns in the environment. However, unlike
conventional planning algorithms, it is harder to change the goal for a learned
policy during deployment. We propose a method for learning to navigate towards
a goal image of the desired destination. By combining a learned policy with a
topological graph constructed out of previously observed data, our system can
determine how to reach this visually indicated goal even in the presence of
variable appearance and lighting. Three key insights, waypoint proposal, graph
pruning and negative mining, enable our method to learn to navigate in
real-world environments using only offline data, a setting where prior methods
struggle. We instantiate our method on a real outdoor ground robot and show
that our system, which we call ViNG, outperforms previously-proposed methods
for goal-conditioned reinforcement learning, including other methods that
incorporate reinforcement learning and search. We also study how ViNG
generalizes to unseen environments and evaluate its ability to adapt to such an
environment with growing experience. Finally, we demonstrate ViNG on a number
of real-world applications, such as last-mile delivery and warehouse
inspection. We encourage the reader to visit the project website for videos of
our experiments and demonstrations: sites.google.com/view/ving-robot.
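To make the approach described in the abstract concrete, here is a minimal Python sketch of planning over a topological graph with a learned distance model: a predictor scores how reachable one observation is from another, only edges below a threshold are kept (graph pruning), and a shortest-path search yields the next waypoint. The function names (predict_distance, build_graph, plan_to_goal), the use of networkx, and the placeholder Euclidean "distance" are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of planning over a topological graph with a learned
# distance predictor, in the spirit of the abstract above. All names and
# the placeholder distance metric are illustrative assumptions.
import networkx as nx
import numpy as np

def predict_distance(obs_a: np.ndarray, obs_b: np.ndarray) -> float:
    """Stand-in for a learned model that estimates how many timesteps
    the robot needs to travel from obs_a to obs_b."""
    return float(np.linalg.norm(obs_a - obs_b))  # placeholder for a trained network

def build_graph(observations, reachable_threshold=6.0):
    """Connect pairs of previously observed states whose predicted distance
    falls below a threshold; pruning distant pairs keeps the graph sparse."""
    graph = nx.DiGraph()
    for i, obs in enumerate(observations):
        graph.add_node(i, obs=obs)
    for i, obs_i in enumerate(observations):
        for j, obs_j in enumerate(observations):
            if i != j and predict_distance(obs_i, obs_j) < reachable_threshold:
                graph.add_edge(i, j, weight=predict_distance(obs_i, obs_j))
    return graph

def plan_to_goal(graph, current_obs, goal_obs):
    """Localize the current and goal images against the graph, then run a
    shortest-path search; the first node on the path is the next waypoint."""
    nodes = list(graph.nodes)
    start = min(nodes, key=lambda n: predict_distance(current_obs, graph.nodes[n]["obs"]))
    goal = min(nodes, key=lambda n: predict_distance(graph.nodes[n]["obs"], goal_obs))
    try:
        return nx.shortest_path(graph, start, goal, weight="weight")
    except nx.NetworkXNoPath:
        return None  # goal not reachable in the current graph

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    observations = rng.uniform(0, 10, size=(20, 2))  # stand-ins for image embeddings
    graph = build_graph(observations)
    path = plan_to_goal(graph, observations[0], observations[-1])
    print(path)  # a low-level policy would then drive toward the second node on the path
```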
Related papers
- NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods.
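The "goal masking" idea can be sketched as a conditioning mask that zeroes out the goal features, so a single set of weights serves both goal-directed navigation and undirected exploration. The MLP below is a generic stand-in for NoMaD's diffusion policy; all layer sizes and names are assumptions for illustration only.

```python
# Sketch of goal masking in a single goal-conditioned policy; the plain
# MLP here is a stand-in, not NoMaD's diffusion architecture.
import torch
import torch.nn as nn

class MaskedGoalPolicy(nn.Module):
    def __init__(self, obs_dim=64, goal_dim=64, action_dim=2):
        super().__init__()
        self.goal_encoder = nn.Linear(goal_dim, 32)
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + 32, 128), nn.ReLU(), nn.Linear(128, action_dim)
        )

    def forward(self, obs, goal, goal_mask):
        # goal_mask is 1.0 when a goal image is provided and 0.0 for
        # goal-agnostic exploration; masking zeroes the goal features so
        # one network handles both modes.
        goal_feat = self.goal_encoder(goal) * goal_mask.unsqueeze(-1)
        return self.policy(torch.cat([obs, goal_feat], dim=-1))

policy = MaskedGoalPolicy()
obs, goal = torch.randn(4, 64), torch.randn(4, 64)
mask = torch.tensor([1.0, 1.0, 0.0, 0.0])  # first two goal-directed, last two exploratory
print(policy(obs, goal, mask).shape)  # torch.Size([4, 2])
```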
arXiv Detail & Related papers (2023-10-11T21:07:14Z)
- Navigating to Objects in the Real World [76.1517654037993]
We present a large-scale empirical study of semantic visual navigation methods comparing methods from classical, modular, and end-to-end learning approaches.
We find that modular learning works well in the real world, attaining a 90% success rate.
In contrast, end-to-end learning does not, dropping from a 77% success rate in simulation to 23% in the real world due to a large image domain gap between simulation and reality.
arXiv Detail & Related papers (2022-12-02T01:10:47Z)
- Lifelong Topological Visual Navigation [16.41858724205884]
We propose a learning-based visual navigation method with graph update strategies that improve lifelong navigation performance over time.
We take inspiration from sampling-based planning algorithms to build image-based topological graphs, resulting in sparser graphs that nonetheless achieve higher navigation performance than baseline methods.
Unlike controllers that learn from fixed training environments, we show that our model can be finetuned using a relatively small dataset from the real-world environment where the robot is deployed.
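A hedged sketch of the kind of graph update such a lifelong system might perform follows: insert newly visited places proposed by a learned reachability model and prune edges whose traversals repeatedly fail. The function names, thresholds, and failure-count rule are assumptions for illustration, not the paper's implementation.

```python
# Illustrative graph-update sketch for lifelong topological navigation:
# add nodes for new observations, drop edges that keep failing.
import networkx as nx

def update_graph(graph, new_node_id, neighbors, failed_edges, fail_counts, max_failures=3):
    """Insert a newly visited place and prune edges that proved untraversable."""
    graph.add_node(new_node_id)
    for nbr, cost in neighbors:          # neighbors proposed by a learned reachability model
        graph.add_edge(new_node_id, nbr, weight=cost)
    for edge in failed_edges:            # edges the robot attempted but could not traverse
        fail_counts[edge] = fail_counts.get(edge, 0) + 1
        if fail_counts[edge] >= max_failures and graph.has_edge(*edge):
            graph.remove_edge(*edge)     # sparser graph, fewer misleading shortcuts
    return graph

graph = nx.DiGraph()
graph.add_edges_from([(0, 1, {"weight": 1.0}), (1, 2, {"weight": 2.0})])
fail_counts = {}
for _ in range(3):
    update_graph(graph, 3, [(2, 1.5)], [(1, 2)], fail_counts)
print(list(graph.edges))  # edge (1, 2) has been pruned after repeated failures
```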
arXiv Detail & Related papers (2021-10-16T06:16:14Z)
- Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills [93.12417203541948]
We propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset.
We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects.
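One common way to realize "reach any goal state in a given dataset" is hindsight goal relabeling over offline trajectories; the sketch below illustrates that general recipe under an assumed data structure and a sparse goal-reaching reward convention, and is not the paper's actual objective or code.

```python
# Sketch of hindsight goal relabeling over an offline dataset: states
# reached later in a trajectory are reused as goals for earlier steps.
import random
from dataclasses import dataclass

@dataclass
class Transition:
    obs: tuple
    action: int
    next_obs: tuple

def relabel_with_goals(trajectory, num_samples=2):
    """Turn each logged transition into goal-conditioned training examples."""
    examples = []
    for t, step in enumerate(trajectory):
        future = trajectory[t:]
        for _ in range(num_samples):
            goal = random.choice(future).next_obs
            reward = 1.0 if step.next_obs == goal else 0.0  # sparse goal-reaching reward
            examples.append((step.obs, step.action, goal, reward))
    return examples

trajectory = [Transition((i,), i % 2, (i + 1,)) for i in range(5)]
print(len(relabel_with_goals(trajectory)))  # 10 relabeled examples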
arXiv Detail & Related papers (2021-04-15T20:10:11Z)
- Rapid Exploration for Open-World Navigation with Latent Goal Models [78.45339342966196]
We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.
At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images.
We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration.
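A latent goal model with an information bottleneck can be sketched as a conditional encoder whose KL penalty compresses the goal representation, with a decoder predicting an action and a distance; sampling the latent from the prior then proposes feasible goals for exploration. The layer sizes, names, and loss terms below are illustrative assumptions, not the paper's model.

```python
# Sketch of a latent goal model with an information-bottleneck-style KL term.
import torch
import torch.nn as nn

class LatentGoalModel(nn.Module):
    def __init__(self, obs_dim=64, latent_dim=8, action_dim=2):
        super().__init__()
        self.encoder = nn.Linear(2 * obs_dim, 2 * latent_dim)          # context + goal -> (mu, logvar)
        self.decoder = nn.Linear(obs_dim + latent_dim, action_dim + 1)  # -> action + distance

    def forward(self, obs, goal):
        mu, logvar = self.encoder(torch.cat([obs, goal], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)        # reparameterized sample
        out = self.decoder(torch.cat([obs, z], -1))
        action, dist = out[..., :-1], out[..., -1]
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()  # bottleneck penalty
        return action, dist, kl

model = LatentGoalModel()
obs, goal = torch.randn(4, 64), torch.randn(4, 64)
action, dist, kl = model(obs, goal)
# For exploration, feasible goals can be proposed by sampling z from the
# prior N(0, I) and decoding, instead of encoding a real goal image.
print(action.shape, dist.shape, kl.item() >= 0)
```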
arXiv Detail & Related papers (2021-04-12T23:14:41Z)
- NavRep: Unsupervised Representations for Reinforcement Learning of Robot Navigation in Dynamic Human Environments [28.530962677406627]
We train two end-to-end and 18 unsupervised-learning-based architectures, and compare them, along with existing approaches, on unseen test cases.
Our results show that unsupervised learning methods are competitive with end-to-end methods.
This release also includes OpenAI-gym-compatible environments designed to emulate the training conditions described by other papers.
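The general recipe being compared, learning a representation without reward labels and then training a small policy on top of the frozen latents, can be sketched as follows; the autoencoder and policy head are generic stand-ins, not NavRep's architectures, and the observation dimension is an assumption.

```python
# Sketch of plugging frozen, unsupervised-learned representations into an
# RL policy; architectures and sizes are illustrative stand-ins.
import torch
import torch.nn as nn

class ObsAutoencoder(nn.Module):
    """Trained without reward labels to reconstruct raw observations."""
    def __init__(self, obs_dim=1080, latent_dim=32):   # e.g. a 2D lidar scan
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, obs_dim))

    def forward(self, obs):
        z = self.encoder(obs)
        return self.decoder(z), z

autoencoder = ObsAutoencoder()
policy_head = nn.Linear(32, 2)   # small policy trained with RL on top of frozen latents

obs = torch.randn(4, 1080)
with torch.no_grad():            # representation stays frozen during RL
    _, latent = autoencoder(obs)
print(policy_head(latent).shape)  # torch.Size([4, 2])
```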
arXiv Detail & Related papers (2020-12-08T12:51:14Z)
- BADGR: An Autonomous Self-Supervised Learning-Based Navigation System [158.6392333480079]
BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate in real-world urban and off-road environments with geometrically distracting obstacles.
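The self-supervised aspect can be sketched as deriving training labels in hindsight from the robot's own sensors, with no human annotation. The sensor fields and thresholds below are assumptions for illustration, not BADGR's code.

```python
# Sketch of generating self-supervised labels from onboard sensors;
# field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SensorLog:
    imu_accel_z: float      # vertical acceleration, a proxy for terrain bumpiness
    bumper_triggered: bool
    lidar_min_range: float  # metres to the nearest obstacle

def label_step(log: SensorLog, bumpy_thresh=4.0, collision_range=0.2):
    """Derive training labels purely from the robot's own sensors:
    did this step collide, and was the terrain bumpy?"""
    collided = log.bumper_triggered or log.lidar_min_range < collision_range
    bumpy = abs(log.imu_accel_z) > bumpy_thresh
    return {"collision": collided, "bumpy": bumpy}

print(label_step(SensorLog(imu_accel_z=5.3, bumper_triggered=False, lidar_min_range=1.0)))
# {'collision': False, 'bumpy': True}
```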
arXiv Detail & Related papers (2020-02-13T18:40:21Z)