A Few Shot Adaptation of Visual Navigation Skills to New Observations using Meta-Learning
- URL: http://arxiv.org/abs/2011.03609v3
- Date: Fri, 4 Jun 2021 18:30:22 GMT
- Title: A Few Shot Adaptation of Visual Navigation Skills to New Observations using Meta-Learning
- Authors: Qian Luo, Maks Sorokin, Sehoon Ha
- Abstract summary: We introduce a learning algorithm that enables rapid adaptation to new sensor configurations or target objects with a few shots.
Our experiments show that our algorithm adapts the learned navigation policy with only three shots for unseen situations.
- Score: 12.771506155747893
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Target-driven visual navigation is a challenging problem that requires a
robot to find the goal using only visual inputs. Many researchers have
demonstrated promising results using deep reinforcement learning (deep RL) on
various robotic platforms, but typical end-to-end learning is known for its
poor extrapolation capability to new scenarios. Therefore, learning a
navigation policy for a new robot with a new sensor configuration or a new
target still remains a challenging problem. In this paper, we introduce a
learning algorithm that enables rapid adaptation to new sensor configurations
or target objects with a few shots. We design a policy architecture with latent
features between perception and inference networks and quickly adapt the
perception network via meta-learning while freezing the inference network. Our
experiments show that our algorithm adapts the learned navigation policy with
only three shots for unseen situations with different sensor configurations or
different target colors. We also analyze the proposed algorithm by
investigating various hyperparameters.
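To make the architecture concrete, here is a minimal PyTorch sketch of the perception/inference split and the few-shot adaptation step the abstract describes. `PerceptionNet`, `InferenceNet`, `adapt_perception`, and all dimensions are illustrative assumptions, not the authors' implementation; the meta-training outer loop is omitted.

```python
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """Maps raw observations to latent features; this part is adapted
    when the sensor configuration or target changes."""
    def __init__(self, obs_dim=64, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)

class InferenceNet(nn.Module):
    """Maps latent features to actions; frozen after meta-training."""
    def __init__(self, latent_dim=32, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, z):
        return self.net(z)

def adapt_perception(perception, inference, shots, lr=1e-3, steps=10):
    """Few-shot adaptation: update only the perception network so the
    frozen policy head reproduces reference actions on the shots."""
    for p in inference.parameters():
        p.requires_grad_(False)  # freeze the inference network
    opt = torch.optim.Adam(perception.parameters(), lr=lr)
    for _ in range(steps):
        for obs, ref_action in shots:  # e.g. only three (obs, action) pairs
            loss = nn.functional.mse_loss(inference(perception(obs)), ref_action)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return perception
```

Keeping the inference network fixed means the few shots only have to re-align the latent features, which is what makes adaptation from as few as three shots plausible.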
Related papers
- NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods.
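A toy sketch of the goal-masking idea described above: the goal embedding is zeroed out for goal-agnostic exploration, so one network serves both modes. NoMaD's actual diffusion action decoder is omitted, and all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class GoalMaskedPolicy(nn.Module):
    def __init__(self, obs_dim=256, goal_dim=256, action_dim=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, obs_feat, goal_feat, goal_mask):
        # goal_mask: 1.0 = goal-directed navigation, 0.0 = exploration
        goal_feat = goal_feat * goal_mask.unsqueeze(-1)
        return self.head(torch.cat([obs_feat, goal_feat], dim=-1))

policy = GoalMaskedPolicy()
obs, goal = torch.randn(1, 256), torch.randn(1, 256)
explore = policy(obs, goal, torch.zeros(1))   # goal ignored
navigate = policy(obs, goal, torch.ones(1))   # goal used
```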
arXiv Detail & Related papers (2023-10-11T21:07:14Z)
- Robot path planning using deep reinforcement learning [0.0]
Reinforcement learning methods offer a map-free alternative for navigation tasks.
Deep reinforcement learning agents are implemented for both the obstacle avoidance and the goal-oriented navigation task.
An analysis of the changes in the behaviour and performance of the agents caused by modifications in the reward function is conducted.
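Since the analysis above varies the reward function, here is a hypothetical reward of the kind such a study would modify; the paper's actual terms and weights are not given here.

```python
def navigation_reward(dist_to_goal, prev_dist_to_goal, collided,
                      reached_goal, step_penalty=0.01, progress_weight=1.0):
    """Illustrative shaped reward for goal-oriented navigation with
    obstacle avoidance; each term is a common choice, not the paper's."""
    if reached_goal:
        return 10.0                          # terminal success bonus
    if collided:
        return -10.0                         # terminal collision penalty
    progress = prev_dist_to_goal - dist_to_goal
    return progress_weight * progress - step_penalty
```

Changing, for example, `step_penalty` trades path length against caution, which is the kind of behavioural shift such an analysis measures.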
arXiv Detail & Related papers (2023-02-17T20:08:59Z)
- See What the Robot Can't See: Learning Cooperative Perception for Visual Navigation [11.943412856714154]
We train the sensors to encode and communicate relevant viewpoint information to the mobile robot.
We overcome the challenge of enabling all the sensors to predict the direction along the shortest path to the target.
Our results show that by using communication between the sensors and the robot, we achieve up to 2.0x improvement in SPL.
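The sensor-to-robot communication can be pictured with a small sketch: each static sensor encodes its view into a compact message and a direction prediction, and the robot fuses the messages. All modules and sizes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SensorNode(nn.Module):
    """Encodes one sensor's view into a message and a direction guess."""
    def __init__(self, view_dim=128, msg_dim=16, num_directions=4):
        super().__init__()
        self.encode = nn.Linear(view_dim, msg_dim)
        self.direction_head = nn.Linear(msg_dim, num_directions)

    def forward(self, view_feat):
        msg = torch.relu(self.encode(view_feat))
        return msg, self.direction_head(msg)

class RobotAggregator(nn.Module):
    """Fuses messages from all sensors into an action."""
    def __init__(self, msg_dim=16, action_dim=4):
        super().__init__()
        self.policy = nn.Linear(msg_dim, action_dim)

    def forward(self, messages):              # (num_sensors, msg_dim)
        fused = messages.mean(dim=0)          # permutation-invariant fusion
        return self.policy(fused)

sensors = [SensorNode() for _ in range(3)]
views = torch.randn(3, 128)
msgs = torch.stack([s(views[i])[0] for i, s in enumerate(sensors)])
action_logits = RobotAggregator()(msgs)
```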
arXiv Detail & Related papers (2022-08-01T11:37:01Z)
- Infrared Small-Dim Target Detection with Transformer under Complex Backgrounds [155.388487263872]
We propose a new infrared small-dim target detection method based on the transformer.
We adopt the self-attention mechanism of the transformer to learn interactions among image features over a larger range.
We also design a feature enhancement module to learn more features of small-dim targets.
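The long-range interaction idea maps naturally onto self-attention over image patches; the sketch below is a generic illustration (the patch count, dimensions, and residual "enhancement" block are assumptions, not the paper's design).

```python
import torch
import torch.nn as nn

patches = torch.randn(1, 196, 64)  # (batch, 14x14 patches, feature dim)
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
ctx, _ = attn(patches, patches, patches)  # every patch attends to all others

# a simple residual block standing in for the feature enhancement module
enhance = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
enhanced = ctx + enhance(ctx)
```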
arXiv Detail & Related papers (2021-09-29T12:23:41Z)
- Deep Learning for Embodied Vision Navigation: A Survey [108.13766213265069]
The "embodied visual navigation" problem requires an agent to navigate in a 3D environment relying mainly on its first-person observations.
This paper outlines the current work in the field of embodied visual navigation through a comprehensive literature survey.
arXiv Detail & Related papers (2021-07-07T12:09:04Z)
- XAI-N: Sensor-based Robot Navigation using Expert Policies and Decision Trees [55.9643422180256]
We present a novel sensor-based learning navigation algorithm to compute a collision-free trajectory for a robot in dense and dynamic environments.
Our approach uses a deep reinforcement learning-based expert policy that is trained using a sim2real paradigm.
We highlight the benefits of our algorithm in simulated environments and in navigating a Clearpath Jackal robot among moving pedestrians.
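One common way to pair an RL expert policy with decision trees, as the title suggests, is to fit a tree to the expert's state-action pairs (behaviour cloning); whether XAI-N does exactly this is not stated in the summary, so treat the sketch below, with its made-up data, as a generic illustration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 8))             # stand-in sensor observations
expert_actions = rng.integers(0, 4, size=1000)  # stand-in expert action labels

# interpretable surrogate policy distilled from the expert
tree = DecisionTreeClassifier(max_depth=6).fit(states, expert_actions)
action = tree.predict(states[:1])
```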
arXiv Detail & Related papers (2021-04-22T01:33:10Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
- Visual Perception Generalization for Vision-and-Language Navigation via Meta-Learning [9.519596058757033]
Vision-and-language navigation (VLN) is a challenging task that requires an agent to navigate in real-world environments by understanding natural language instructions and visual information received in real-time.
We propose a visual perception generalization strategy based on meta-learning, which enables the agent to fast adapt to a new camera configuration with a few shots.
arXiv Detail & Related papers (2020-12-10T04:10:04Z)
- Robot Perception enables Complex Navigation Behavior via Self-Supervised Learning [23.54696982881734]
We propose an approach to unify successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL).
Our method temporally incorporates compact motion and visual perception data, directly obtained using self-supervision from a single image sequence.
We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework.
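The temporal incorporation of compact motion and visual data can be sketched with a recurrent unit; the self-supervised feature extraction itself is abstracted away, and every size below is an assumption.

```python
import torch
import torch.nn as nn

visual_dim, motion_dim, hidden = 64, 6, 128
gru = nn.GRU(input_size=visual_dim + motion_dim,
             hidden_size=hidden, batch_first=True)

T = 10                                    # sequence length
visual = torch.randn(1, T, visual_dim)    # per-frame visual embedding
motion = torch.randn(1, T, motion_dim)    # per-frame ego-motion estimate
features, _ = gru(torch.cat([visual, motion], dim=-1))
# features[:, -1] would feed a navigation policy head
```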
arXiv Detail & Related papers (2020-06-16T07:45:47Z)
- Improving Target-driven Visual Navigation with Attention on 3D Spatial Relationships [52.72020203771489]
We investigate target-driven visual navigation using deep reinforcement learning (DRL) in 3D indoor scenes.
Our proposed method combines visual features and 3D spatial representations to learn navigation policy.
Our experiments, performed in the AI2-THOR environment, show that our model outperforms the baselines in both SR and SPL metrics.
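Combining visual features with attended 3D spatial representations might look like the following sketch; the object features, query construction, and dimensions are all illustrative assumptions.

```python
import torch
import torch.nn as nn

obj_feats = torch.randn(1, 5, 32)   # 5 objects: appearance + 3D position encoding
query = torch.randn(1, 1, 32)       # target-conditioned query
attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
spatial_ctx, weights = attn(query, obj_feats, obj_feats)

visual_feat = torch.randn(1, 64)    # global visual feature
policy_head = nn.Linear(64 + 32, 4) # 4 discrete navigation actions
logits = policy_head(torch.cat([visual_feat, spatial_ctx.squeeze(1)], dim=-1))
```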
arXiv Detail & Related papers (2020-04-29T08:46:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.