Self-Supervised Domain Adaptation for Visual Navigation with Global Map Consistency
- URL: http://arxiv.org/abs/2110.07184v1
- Date: Thu, 14 Oct 2021 07:14:36 GMT
- Title: Self-Supervised Domain Adaptation for Visual Navigation with Global Map Consistency
- Authors: Eun Sun Lee, Junho Kim, and Young Min Kim
- Abstract summary: We propose a self-supervised adaptation for a visual navigation agent to generalize to an unseen environment.
The proposed task is completely self-supervised, not requiring any supervision from ground-truth pose data or explicit noise model.
Our experiments show that the proposed task helps the agent to successfully transfer to new, noisy environments.
- Score: 6.385006149689549
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a light-weight, self-supervised adaptation for a visual navigation
agent to generalize to an unseen environment. Given an embodied agent trained in a
noiseless environment, our objective is to transfer the agent to a noisy
environment where actuation and odometry sensor noise is present. Our method
encourages the agent to maximize the consistency between the global maps
generated at different time steps in a round-trip trajectory. The proposed task
is completely self-supervised, not requiring any supervision from ground-truth
pose data or explicit noise model. In addition, optimization of the task
objective is extremely light-weight, as training terminates within a few
minutes on a commodity GPU. Our experiments show that the proposed task helps
the agent to successfully transfer to new, noisy environments. The transferred
agent exhibits improved localization and mapping accuracy, further leading to
enhanced performance in downstream visual navigation tasks. Moreover, we
demonstrate test-time adaptation with our self-supervised task to show its
potential applicability in real-world deployment.
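The core idea in the abstract — penalizing disagreement between global maps built at different time steps of a round-trip trajectory — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the map representation, the `-1` unobserved marker, and the masking scheme are all assumptions made for the example.

```python
import numpy as np

def map_consistency_loss(map_out: np.ndarray, map_back: np.ndarray) -> float:
    """Self-supervised consistency loss between the global map built on the
    outbound leg and the map built on the return leg of a round trip.

    Both inputs are (H, W) occupancy grids in the same global frame, with
    cell values in [0, 1] and -1 marking unobserved cells (an assumption
    for this sketch). Actuation/odometry noise misaligns the return-leg
    map, so disagreement over jointly observed cells serves as a
    supervision-free training signal.
    """
    # Only compare cells observed on both legs.
    observed = (map_out >= 0) & (map_back >= 0)
    if not observed.any():
        return 0.0
    # Mean absolute disagreement over jointly observed cells.
    return float(np.abs(map_out[observed] - map_back[observed]).mean())

# Toy example: two 4x4 occupancy grids, -1 marks unobserved cells.
a = np.array([[0.9, 0.1, -1.0, -1.0],
              [0.8, 0.2, 0.5, -1.0],
              [-1.0, 0.3, 0.4, 0.6],
              [-1.0, -1.0, 0.7, 0.9]])
b = a.copy()
b[0, 0] = 0.5  # simulated noise perturbs one jointly observed cell
loss = map_consistency_loss(a, b)  # 0.4 error averaged over 10 shared cells
```

In the paper's setting this scalar would be minimized by gradient descent on the agent's mapping/localization modules, with no ground-truth pose or explicit noise model required.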
Related papers
- Improving Zero-Shot ObjectNav with Generative Communication [60.84730028539513]
We propose a new method for improving zero-shot ObjectNav.
Our approach takes into account that the ground agent may have limited and sometimes obstructed view.
arXiv Detail & Related papers (2024-08-03T22:55:26Z)
- Active Sensing with Predictive Coding and Uncertainty Minimization [0.0]
We present an end-to-end procedure for embodied exploration inspired by two biological computations.
We first demonstrate our approach in a maze navigation task and show that it can discover the underlying transition distributions and spatial features of the environment.
We show that our model builds unsupervised representations through exploration that allow it to efficiently categorize visual scenes.
arXiv Detail & Related papers (2023-07-02T21:14:49Z)
- Masked Path Modeling for Vision-and-Language Navigation [41.7517631477082]
Vision-and-language navigation (VLN) agents are trained to navigate in real-world environments by following natural language instructions.
Previous approaches have attempted to address this issue by introducing additional supervision during training.
We introduce a masked path modeling (MPM) objective, which pretrains an agent using self-collected data for downstream navigation tasks.
arXiv Detail & Related papers (2023-05-23T17:20:20Z)
- Lifelong Unsupervised Domain Adaptive Person Re-identification with Coordinated Anti-forgetting and Adaptation [127.6168183074427]
We propose a new task, Lifelong Unsupervised Domain Adaptive (LUDA) person ReID.
This is challenging because it requires the model to continuously adapt to unlabeled data of the target environments.
We design an effective scheme for this task, dubbed CLUDA-ReID, where the anti-forgetting is harmoniously coordinated with the adaptation.
arXiv Detail & Related papers (2021-12-13T13:19:45Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the expected agent's performance by selecting promising trajectories solving prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
- Pushing it out of the Way: Interactive Visual Navigation [62.296686176988125]
We study the problem of interactive navigation where agents learn to change the environment to navigate more efficiently to their goals.
We introduce the Neural Interaction Engine (NIE) to explicitly predict the change in the environment caused by the agent's actions.
By modeling the changes while planning, we find that agents exhibit significant improvements in their navigational capabilities.
arXiv Detail & Related papers (2021-04-28T22:46:41Z)
- Embodied Visual Active Learning for Semantic Segmentation [33.02424587900808]
We study the task of embodied visual active learning, where an agent explores a 3D environment with the goal of acquiring visual scene understanding.
We develop a battery of agents - both learnt and pre-specified - and with different levels of knowledge of the environment.
We extensively evaluate the proposed models using the Matterport3D simulator and show that a fully learnt method outperforms comparable pre-specified counterparts.
arXiv Detail & Related papers (2020-12-17T11:02:34Z)
- Integrating Egocentric Localization for More Realistic Point-Goal Navigation Agents [90.65480527538723]
We develop point-goal navigation agents that rely on visual estimates of egomotion under noisy action dynamics.
Our agent was the runner-up in the PointNav track of CVPR 2020 Habitat Challenge.
arXiv Detail & Related papers (2020-09-07T16:52:47Z)
- Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies [57.27944046925876]
We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph.
Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference.
Our experimental results on two grid-world domains and StarCraft II environments show that the proposed method accurately infers the latent task parameter.
arXiv Detail & Related papers (2020-01-01T17:34:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.