Integrating Egocentric Localization for More Realistic Point-Goal
Navigation Agents
- URL: http://arxiv.org/abs/2009.03231v1
- Date: Mon, 7 Sep 2020 16:52:47 GMT
- Title: Integrating Egocentric Localization for More Realistic Point-Goal
Navigation Agents
- Authors: Samyak Datta, Oleksandr Maksymets, Judy Hoffman, Stefan Lee, Dhruv
Batra, Devi Parikh
- Abstract summary: We develop point-goal navigation agents that rely on visual estimates of egomotion under noisy action dynamics.
Our agent was the runner-up in the PointNav track of CVPR 2020 Habitat Challenge.
- Score: 90.65480527538723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has presented embodied agents that can navigate to point-goal
targets in novel indoor environments with near-perfect accuracy. However, these
agents are equipped with idealized sensors for localization and take
deterministic actions. This setting is practically sterile by comparison to the
dirty reality of noisy sensors and actuations in the real world -- wheels can
slip, motion sensors have error, actuations can rebound. In this work, we take
a step towards this noisy reality, developing point-goal navigation agents that
rely on visual estimates of egomotion under noisy action dynamics. We find
these agents outperform naive adaptations of current point-goal agents to this
setting as well as those incorporating classic localization baselines. Further,
our model conceptually divides learning agent dynamics or odometry (where am
I?) from task-specific navigation policy (where do I want to go?). This enables
a seamless adaptation to changing dynamics (a different robot or floor type) by
simply re-calibrating the visual odometry model -- circumventing the expense of
re-training the navigation policy. Our agent was the runner-up in the
PointNav track of CVPR 2020 Habitat Challenge.
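The abstract's split between learning odometry ("where am I?") and the navigation policy ("where do I want to go?") can be made concrete with a small sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a 2D pose (x, y, heading), a hypothetical `odometry_model` that predicts per-step egomotion from consecutive observations, and a hypothetical `policy` that consumes the point-goal re-expressed in the agent's estimated frame.

```python
import numpy as np

class PoseIntegrator:
    """Dead-reckons a 2D pose (x, y, heading) from per-step egomotion estimates."""

    def __init__(self):
        # Start at the origin of the episode frame.
        self.x, self.y, self.theta = 0.0, 0.0, 0.0

    def update(self, dx, dy, dtheta):
        """dx/dy: estimated translation in the agent's body frame; dtheta: heading change."""
        # Rotate the body-frame translation into the episode frame before accumulating.
        c, s = np.cos(self.theta), np.sin(self.theta)
        self.x += c * dx - s * dy
        self.y += s * dx + c * dy
        # Wrap the heading to [-pi, pi).
        self.theta = (self.theta + dtheta + np.pi) % (2 * np.pi) - np.pi

    def goal_in_agent_frame(self, goal_xy):
        """Re-express the (fixed) episode-frame goal relative to the current pose estimate."""
        gx, gy = goal_xy[0] - self.x, goal_xy[1] - self.y
        c, s = np.cos(-self.theta), np.sin(-self.theta)
        return np.array([c * gx - s * gy, s * gx + c * gy])


def navigation_step(policy, odometry_model, prev_obs, obs, integrator, goal_xy):
    # 1) "Where am I?": the odometry model estimates egomotion from consecutive observations.
    dx, dy, dtheta = odometry_model(prev_obs, obs)   # hypothetical visual-odometry callable
    integrator.update(dx, dy, dtheta)
    # 2) "Where do I want to go?": the policy only sees the goal in its own estimated frame.
    relative_goal = integrator.goal_in_agent_frame(goal_xy)
    return policy(obs, relative_goal)                # hypothetical navigation-policy callable
```

In a Habitat-style PointNav setup, `obs` would be the agent's RGB-D observation and `goal_xy` the episode goal expressed in the start frame; under this structure, moving to a different robot or floor type would mainly require re-calibrating `odometry_model`, leaving `policy` untouched.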
Related papers
- Learning to navigate efficiently and precisely in real environments [14.52507964172957]
The embodied AI literature focuses on end-to-end agents trained in simulators like Habitat or AI2-THOR.
In this work we explore end-to-end training of agents in simulation in settings which minimize the sim2real gap.
arXiv Detail & Related papers (2024-01-25T17:50:05Z)
- Differentiable Raycasting for Self-supervised Occupancy Forecasting [52.61762537741392]
Motion planning for autonomous driving requires learning how the environment around an ego-vehicle evolves with time.
In this paper, we use geometric occupancy as a natural alternative to view-dependent representations such as freespace.
Our key insight is to use differentiable raycasting to "render" future occupancy predictions into future LiDAR sweep predictions.
arXiv Detail & Related papers (2022-10-04T21:35:21Z)
- Self-Supervised Domain Adaptation for Visual Navigation with Global Map Consistency [6.385006149689549]
We propose a self-supervised adaptation for a visual navigation agent to generalize to unseen environments.
The proposed task is completely self-supervised, not requiring any supervision from ground-truth pose data or an explicit noise model.
Our experiments show that the proposed task helps the agent to successfully transfer to new, noisy environments.
arXiv Detail & Related papers (2021-10-14T07:14:36Z)
- Pushing it out of the Way: Interactive Visual Navigation [62.296686176988125]
We study the problem of interactive navigation where agents learn to change the environment to navigate more efficiently to their goals.
We introduce the Neural Interaction Engine (NIE) to explicitly predict the change in the environment caused by the agent's actions.
By modeling the changes while planning, we find that agents exhibit significant improvements in their navigational capabilities.
arXiv Detail & Related papers (2021-04-28T22:46:41Z)
- Success Weighted by Completion Time: A Dynamics-Aware Evaluation Criteria for Embodied Navigation [42.978177196888225]
We present Success weighted by Completion Time (SCT), a new metric for evaluating navigation performance for mobile robots; a hedged formula sketch for this family of metrics appears after this list.
We also present RRT*-Unicycle, an algorithm for unicycle dynamics that estimates the fastest collision-free path and completion time.
arXiv Detail & Related papers (2021-03-14T20:13:06Z)
- Guided Navigation from Multiple Viewpoints using Qualitative Spatial Reasoning [0.0]
This work aims to develop algorithms capable of guiding a sensory-deprived robot to a goal location.
The main task considered is, given a group of autonomous agents, to develop and evaluate algorithms capable of producing a set of high-level commands.
arXiv Detail & Related papers (2020-11-03T00:34:26Z)
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
- Improving Target-driven Visual Navigation with Attention on 3D Spatial Relationships [52.72020203771489]
We investigate target-driven visual navigation using deep reinforcement learning (DRL) in 3D indoor scenes.
Our proposed method combines visual features and 3D spatial representations to learn navigation policy.
Our experiments, performed in AI2-THOR, show that our model outperforms the baselines on both SR and SPL metrics.
arXiv Detail & Related papers (2020-04-29T08:46:38Z)
- Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)
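Two of the entries above report success-weighted evaluation metrics: SR and SPL in "Improving Target-driven Visual Navigation with Attention on 3D Spatial Relationships", and SCT in "Success Weighted by Completion Time". As a reading aid, here is a hedged sketch of how such metrics are typically computed. The SPL form follows the standard definition (success weighted by the ratio of the shortest-path length to the longer of the agent's path and the shortest path); the SCT form is assumed to be the analogous ratio with completion time, with the fastest achievable time coming from a planner such as the RRT*-Unicycle algorithm mentioned in that entry. Consult the original papers for the exact definitions.

```python
from typing import Sequence

def spl(successes: Sequence[bool], shortest_dists: Sequence[float],
        path_lengths: Sequence[float]) -> float:
    """Success weighted by Path Length, averaged over episodes (standard definition)."""
    terms = [
        float(s) * l / max(p, l)          # failed episodes contribute 0
        for s, l, p in zip(successes, shortest_dists, path_lengths)
    ]
    return sum(terms) / len(terms)

def sct(successes: Sequence[bool], fastest_times: Sequence[float],
        completion_times: Sequence[float]) -> float:
    """Assumed SCT form: success weighted by (fastest achievable time / agent's time)."""
    terms = [
        float(s) * t_best / max(t_agent, t_best)
        for s, t_best, t_agent in zip(successes, fastest_times, completion_times)
    ]
    return sum(terms) / len(terms)
```

Both reduce to the plain success rate when the agent matches the optimal path (or time) in every successful episode.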