Robot in a China Shop: Using Reinforcement Learning for
Location-Specific Navigation Behaviour
- URL: http://arxiv.org/abs/2106.01434v1
- Date: Wed, 2 Jun 2021 19:31:27 GMT
- Title: Robot in a China Shop: Using Reinforcement Learning for
Location-Specific Navigation Behaviour
- Authors: Xihan Bian and Oscar Mendez and Simon Hadfield
- Abstract summary: We propose a new approach to navigation, where it is treated as a multi-task learning problem.
This enables the robot to learn to behave differently in visual navigation tasks for different environments.
- Score: 24.447207633447363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robots need to be able to work in multiple different environments. Even when
performing similar tasks, different behaviour should be deployed to best fit
the current environment. In this paper, we propose a new approach to
navigation, treating it as a multi-task learning problem. This enables
the robot to learn to behave differently in visual navigation tasks for
different environments while also learning shared expertise across
environments. We evaluated our approach both in simulated environments and
on real-world data. Our method converges with a 26% reduction in training
time while also increasing accuracy.
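The shared-expertise idea in the abstract can be sketched as a policy with a common trunk reused across environments plus a lightweight head per environment. This is a minimal illustration only, not the authors' architecture; the environment names, sizes, and random weights are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk: one weight matrix reused for every environment,
# capturing navigation expertise common to all of them.
W_shared = rng.standard_normal((8, 16)) * 0.1

# One small head per environment, producing action logits.
# Environment names here are illustrative, not from the paper.
heads = {env: rng.standard_normal((16, 4)) * 0.1
         for env in ("office", "warehouse")}

def policy_logits(observation, env):
    """Forward pass: shared features, then the environment-specific head."""
    features = np.tanh(observation @ W_shared)  # shared representation
    return features @ heads[env]                # location-specific behaviour

obs = rng.standard_normal(8)
logits_office = policy_logits(obs, "office")
logits_warehouse = policy_logits(obs, "warehouse")
```

The same observation yields different action logits per environment, while gradient updates to `W_shared` would benefit every environment at once, which is the multi-task sharing the abstract describes.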
Related papers
- Target Search and Navigation in Heterogeneous Robot Systems with Deep
Reinforcement Learning [3.3167319223959373]
We design a heterogeneous robot system consisting of a UAV and a UGV for search and rescue missions in unknown environments.
The system is able to search for targets and navigate to them in a maze-like mine environment with the policies learned through deep reinforcement learning algorithms.
arXiv Detail & Related papers (2023-08-01T07:09:14Z)
- Learning Hierarchical Interactive Multi-Object Search for Mobile Manipulation [10.21450780640562]
We introduce a novel interactive multi-object search task in which a robot has to open doors to navigate rooms and search inside cabinets and drawers to find target objects.
These new challenges require combining manipulation and navigation skills in unexplored environments.
We present HIMOS, a hierarchical reinforcement learning approach that learns to compose exploration, navigation, and manipulation skills.
arXiv Detail & Related papers (2023-07-12T12:25:33Z)
- A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning [86.06110576808824]
Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advancements in machine learning algorithms and libraries, combined with a carefully tuned robot controller, enable a quadruped to learn to walk in only 20 minutes in the real world.
arXiv Detail & Related papers (2022-08-16T17:37:36Z)
- SKILL-IL: Disentangling Skill and Knowledge in Multitask Imitation Learning [21.222568055417717]
Humans are able to transfer skills and knowledge. If we can cycle to work and drive to the store, we can also cycle to the store and drive to work.
We take inspiration from this and hypothesize the latent memory of a policy network can be disentangled into two partitions.
These contain either the knowledge of the environmental context for the task or the generalizable skill needed to solve the task.
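The two-partition hypothesis above can be sketched by composing a task latent from a skill half and an environment-knowledge half, so that partitions learned on seen tasks recombine into an unseen one. This is a schematic sketch under invented names and sizes, not the SKILL-IL implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4  # size of each latent partition (an assumption for illustration)

# Per-task latents, already split into the two hypothesised partitions:
# a generalizable skill and knowledge of the environmental context.
skills = {"cycle": rng.standard_normal(D), "drive": rng.standard_normal(D)}
contexts = {"work": rng.standard_normal(D), "store": rng.standard_normal(D)}

def task_latent(skill, context):
    """Compose a policy latent from a skill half and a knowledge half."""
    return np.concatenate([skills[skill], contexts[context]])

# Having seen cycle-to-work and drive-to-store, a disentangled memory
# can recombine its partitions into the unseen cycle-to-store task.
z_new = task_latent("cycle", "store")
```

The recombination mirrors the paper's motivating example: cycling to work and driving to the store imply the ability to cycle to the store.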
arXiv Detail & Related papers (2022-05-06T10:38:01Z)
- Unsupervised Online Learning for Robotic Interestingness with Visual Memory [9.189959184116962]
We develop a method that automatically adapts online to the environment to report interesting scenes quickly.
We achieve an average of 20% higher accuracy than the state-of-the-art unsupervised methods in a subterranean tunnel environment.
arXiv Detail & Related papers (2021-11-18T16:51:39Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a practical sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only Onboard Sensors [64.2809875343854]
We study how robots can autonomously learn skills that require a combination of navigation and grasping.
Our system, ReLMM, can learn continuously on a real-world platform without any environment instrumentation.
After a grasp curriculum training phase, ReLMM can learn navigation and grasping together fully automatically, in around 40 hours of real-world training.
arXiv Detail & Related papers (2021-07-28T17:59:41Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
- NavRep: Unsupervised Representations for Reinforcement Learning of Robot Navigation in Dynamic Human Environments [28.530962677406627]
We train two end-to-end and 18 unsupervised-learning-based architectures and compare them, along with existing approaches, on unseen test cases.
Our results show that unsupervised learning methods are competitive with end-to-end methods.
This release also includes OpenAI-gym-compatible environments designed to emulate the training conditions described by other papers.
arXiv Detail & Related papers (2020-12-08T12:51:14Z)
- Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm is capable of autonomously discovering, learning, and adapting interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties.
arXiv Detail & Related papers (2020-09-23T07:18:21Z)
- Environment-agnostic Multitask Learning for Natural Language Grounded Navigation [88.69873520186017]
We introduce a multitask navigation model that can be seamlessly trained on Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks.
Experiments show that environment-agnostic multitask learning significantly reduces the performance gap between seen and unseen environments.
arXiv Detail & Related papers (2020-03-01T09:06:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.