Rapid Exploration for Open-World Navigation with Latent Goal Models
- URL: http://arxiv.org/abs/2104.05859v5
- Date: Wed, 11 Oct 2023 09:07:01 GMT
- Title: Rapid Exploration for Open-World Navigation with Latent Goal Models
- Authors: Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart,
Sergey Levine
- Abstract summary: We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.
At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images.
We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration.
- Score: 78.45339342966196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe a robotic learning system for autonomous exploration and
navigation in diverse, open-world environments. At the core of our method is a
learned latent variable model of distances and actions, along with a
non-parametric topological memory of images. We use an information bottleneck
to regularize the learned policy, giving us (i) a compact visual representation
of goals, (ii) improved generalization capabilities, and (iii) a mechanism for
sampling feasible goals for exploration. Trained on a large offline dataset of
prior experience, the model acquires a representation of visual goals that is
robust to task-irrelevant distractors. We demonstrate our method on a mobile
ground robot in open-world exploration scenarios. Given an image of a goal that
is up to 80 meters away, our method leverages its representation to explore and
discover the goal in under 20 minutes, even amidst previously-unseen obstacles
and weather conditions. Please check out the project website for videos of our
experiments and information about the real-world dataset used at
https://sites.google.com/view/recon-robot.
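To make the abstract's mechanism concrete, here is a minimal sketch of an information-bottleneck latent goal model in its spirit: a goal image is encoded into a Gaussian latent regularized toward a unit-normal prior, distance and action heads read from the latent, and feasible exploration goals can be sampled from that prior. This is not the authors' released code; the module names, feature dimensions, and the bottleneck weight beta are illustrative assumptions, and image features are assumed to be precomputed embeddings.

```python
# Illustrative sketch only; all architecture choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentGoalModel(nn.Module):
    def __init__(self, feat_dim=512, latent_dim=32, act_dim=2):
        super().__init__()
        # q(z | obs, goal): Gaussian posterior over the latent goal z
        self.encoder = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),  # -> mean and log-variance
        )
        # Heads predict distance-to-goal and action from (obs, z)
        self.dist_head = nn.Sequential(
            nn.Linear(feat_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, 1))
        self.act_head = nn.Sequential(
            nn.Linear(feat_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))

    def forward(self, obs_feat, goal_feat):
        mu, logvar = self.encoder(torch.cat([obs_feat, goal_feat], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        h = torch.cat([obs_feat, z], -1)
        return self.dist_head(h), self.act_head(h), mu, logvar

def ib_loss(model, obs, goal, dist_target, act_target, beta=0.01):
    dist, act, mu, logvar = model(obs, goal)
    # KL(q(z|obs,goal) || N(0, I)) is the bottleneck term. Because the prior
    # is a unit Gaussian, sampling z ~ N(0, I) at test time yields plausible
    # goals, matching point (iii) of the abstract.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
    return (F.mse_loss(dist.squeeze(-1), dist_target)
            + F.mse_loss(act, act_target) + beta * kl)
```

Because the posterior is pulled toward the shared prior, latents drawn from N(0, I) correspond to goals the model considers reachable, which is what makes sampled-goal exploration workable.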
Related papers
- NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments compared with five alternative methods.
arXiv Detail & Related papers (2023-10-11T21:07:14Z)
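The unification in NoMaD rests on goal masking: during training, the goal conditioning is randomly dropped, so a single network learns both goal-reaching and undirected exploration. Below is a hedged sketch of that masking idea only, with a plain MLP standing in for NoMaD's diffusion action head; all names and dimensions are assumptions.

```python
# Illustrative goal-masking sketch, not the NoMaD code.
import torch
import torch.nn as nn

class GoalMaskedPolicy(nn.Module):
    def __init__(self, obs_dim=256, goal_dim=256, act_dim=2, p_mask=0.5):
        super().__init__()
        self.p_mask = p_mask
        self.net = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs_emb, goal_emb, explore=False):
        if self.training:
            # Drop the goal per sample with probability p_mask so one network
            # learns both goal-conditioned and goal-agnostic behavior.
            keep = (torch.rand(goal_emb.shape[0], 1, device=goal_emb.device)
                    > self.p_mask).float()
            goal_emb = goal_emb * keep
        elif explore:
            # Goal-agnostic (exploration) mode at test time: zero the goal.
            goal_emb = torch.zeros_like(goal_emb)
        return self.net(torch.cat([obs_emb, goal_emb], dim=-1))
```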
- Autonomous Marker-less Rapid Aerial Grasping [5.892028494793913]
We propose a vision-based system for autonomous rapid aerial grasping.
We generate a dense point cloud of the detected objects and perform geometry-based grasp planning.
We show the first use of geometry-based grasping techniques with a flying platform.
arXiv Detail & Related papers (2022-11-23T16:25:49Z)
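As a rough illustration of geometry-based grasp planning on a point cloud (not the authors' pipeline), one common recipe is to grasp at the object's centroid and close the gripper along the minor principal axis of the points. The function below is an assumed, simplified stand-in for such a planner.

```python
# Simplified, assumed grasp-planning sketch; not the paper's implementation.
import numpy as np

def plan_grasp(points: np.ndarray):
    """points: (N, 3) array of an object's 3D points. Returns the grasp
    position and the gripper closing direction as a unit vector."""
    centroid = points.mean(axis=0)
    # Principal axes via eigen-decomposition of the point-cloud covariance.
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    closing_dir = eigvecs[:, 0]  # minor axis: thinnest extent of the object
    return centroid, closing_dir / np.linalg.norm(closing_dir)
```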
- GNM: A General Navigation Model to Drive Any Robot [67.40225397212717]
A general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots.
We analyze the necessary design decisions for effective data sharing across robots.
We deploy the trained GNM on a range of new robots, including an underactuated quadrotor.
arXiv Detail & Related papers (2022-10-07T07:26:41Z)
- Unsupervised Online Learning for Robotic Interestingness with Visual Memory [9.189959184116962]
We develop a method that automatically adapts online to the environment to report interesting scenes quickly.
We achieve an average of 20% higher accuracy than the state-of-the-art unsupervised methods in a subterranean tunnel environment.
arXiv Detail & Related papers (2021-11-18T16:51:39Z)
- Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills [93.12417203541948]
We propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset.
We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects.
arXiv Detail & Related papers (2021-04-15T20:10:11Z)
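The "reach any goal state in a given dataset" objective is commonly implemented with hindsight goal relabeling: future observations from the same trajectory are treated as goals, yielding goal-reaching supervision without reward labels. The sketch below shows that relabeling step only, with illustrative names; it is not the Actionable Models implementation, which further trains a goal-conditioned Q-function with conservative regularization.

```python
# Hindsight goal-relabeling sketch; names and structure are assumptions.
import random

def relabel_trajectory(traj, num_goals=4):
    """traj: list of (obs, action) pairs from one trajectory. Returns
    (obs, action, goal_obs, done) tuples where each goal is an observation
    reached at the same or a later timestep of the same trajectory."""
    examples = []
    for t, (obs, action) in enumerate(traj):
        for _ in range(num_goals):
            g = random.randrange(t, len(traj))  # sample a future timestep
            goal_obs = traj[g][0]
            examples.append((obs, action, goal_obs, g == t))
    return examples
```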
- Model-Based Visual Planning with Self-Supervised Functional Distances [104.83979811803466]
We present a self-supervised method for model-based visual goal reaching.
Our approach learns entirely using offline, unlabeled data.
We find that this approach substantially outperforms both model-free and model-based prior methods.
arXiv Detail & Related papers (2020-12-30T23:59:09Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
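ViNG, like the main paper above, plans over a topological graph whose nodes are previously seen images and whose edges are weighted by a learned distance predictor. A minimal sketch of that planning step follows; `predicted_distance` is a hypothetical stand-in for the learned model, and the connection threshold is an assumption.

```python
# Topological-memory planning sketch; not the released ViNG/RECON code.
import networkx as nx

def build_graph(images, predicted_distance, max_dist=10.0):
    g = nx.DiGraph()
    for i, a in enumerate(images):
        for j, b in enumerate(images):
            if i != j:
                d = predicted_distance(a, b)
                if d < max_dist:  # connect only nodes the policy can traverse
                    g.add_edge(i, j, weight=d)
    return g

def plan(g, start, goal):
    # Dijkstra over predicted distances yields a sequence of subgoal images.
    return nx.shortest_path(g, start, goal, weight="weight")
```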
- Robot Perception enables Complex Navigation Behavior via Self-Supervised Learning [23.54696982881734]
We propose an approach to unify successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL).
Our method temporally incorporates compact motion and visual perception data, directly obtained using self-supervision from a single image sequence.
We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework.
arXiv Detail & Related papers (2020-06-16T07:45:47Z)