APPLD: Adaptive Planner Parameter Learning from Demonstration
- URL: http://arxiv.org/abs/2004.00116v4
- Date: Wed, 15 Jul 2020 18:35:10 GMT
- Title: APPLD: Adaptive Planner Parameter Learning from Demonstration
- Authors: Xuesu Xiao, Bo Liu, Garrett Warnell, Jonathan Fink, Peter Stone
- Abstract summary: We introduce APPLD, Adaptive Planner Parameter Learning from Demonstration, which allows existing navigation systems to be successfully applied to new complex environments.
APPLD is verified on two robots running different navigation systems in different environments.
Experimental results show that APPLD can outperform navigation systems with the default and expert-tuned parameters, and even the human demonstrator themselves.
- Score: 48.63930323392909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing autonomous robot navigation systems allow robots to move from one
point to another in a collision-free manner. However, when facing new
environments, these systems generally require re-tuning by expert roboticists
with a good understanding of the inner workings of the navigation system. In
contrast, even users who are unversed in the details of robot navigation
algorithms can generate desirable navigation behavior in new environments via
teleoperation. In this paper, we introduce APPLD, Adaptive Planner Parameter
Learning from Demonstration, that allows existing navigation systems to be
successfully applied to new complex environments, given only a human
teleoperated demonstration of desirable navigation. APPLD is verified on two
robots running different navigation systems in different environments.
Experimental results show that APPLD can outperform navigation systems with the
default and expert-tuned parameters, and even the human demonstrator
themselves.
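The abstract describes the core idea at a high level: tune an existing planner's parameters so that its output matches a human teleoperated demonstration. The sketch below illustrates that idea under strong simplifying assumptions and is not the paper's implementation: `plan_velocity` is a toy stand-in for a real local planner, the demonstration-segmentation step is omitted, and a plain random search stands in for a derivative-free optimizer such as CMA-ES.

```python
import numpy as np

# Toy stand-in for a local planner: maps (observation, parameter vector) -> velocity.
# In a real system this would wrap e.g. a DWA/E-Band planner and its tunable parameters.
def plan_velocity(obs, params):
    return np.tanh(obs @ params[: obs.shape[0]]) * params[-1]

def action_matching_loss(params, observations, demo_velocities):
    """Mean squared error between planner output and the demonstrated commands."""
    pred = np.array([plan_velocity(o, params) for o in observations])
    return float(np.mean((pred - demo_velocities) ** 2))

def tune_segment(observations, demo_velocities, dim, iters=200, seed=0):
    """Black-box search for planner parameters that imitate one demonstration segment."""
    rng = np.random.default_rng(seed)
    best_p, best_loss = None, np.inf
    for _ in range(iters):
        candidate = rng.uniform(-1.0, 1.0, size=dim)
        loss = action_matching_loss(candidate, observations, demo_velocities)
        if loss < best_loss:
            best_p, best_loss = candidate, loss
    return best_p, best_loss

if __name__ == "__main__":
    # Toy "demonstration": observations and the velocities a human commanded.
    rng = np.random.default_rng(1)
    obs = rng.normal(size=(50, 4))
    demo_v = np.tanh(obs @ np.array([0.5, -0.2, 0.1, 0.3])) * 0.8
    params, loss = tune_segment(obs, demo_v, dim=5)
    print(f"best action-matching loss: {loss:.4f}")
```

Treating the planner as a black box is the key design point: no gradients of the navigation stack are needed, only repeated evaluations of how closely its commands track the demonstration.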
Related papers
- Hierarchical end-to-end autonomous navigation through few-shot waypoint detection [0.0]
Human navigation is facilitated through the association of actions with landmarks.
Current autonomous navigation schemes rely on accurate positioning devices and algorithms as well as extensive streams of sensory data collected from the environment.
We propose a hierarchical end-to-end meta-learning scheme that enables a mobile robot to navigate in a previously unknown environment.
arXiv Detail & Related papers (2024-09-23T00:03:39Z) - Aligning Robot Navigation Behaviors with Human Intentions and Preferences [2.9914612342004503]
This dissertation aims to answer the question: "How can we use machine learning methods to align the navigational behaviors of autonomous mobile robots with human intentions and preferences?"
First, this dissertation introduces a new approach to learning navigation behaviors by imitating human-provided demonstrations of the intended navigation task.
Second, this dissertation introduces two algorithms to enhance terrain-aware off-road navigation for mobile robots by learning visual terrain awareness in a self-supervised manner.
arXiv Detail & Related papers (2024-09-16T03:45:00Z) - ETPNav: Evolving Topological Planning for Vision-Language Navigation in
Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability to perform obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets.
arXiv Detail & Related papers (2023-04-06T13:07:17Z) - Learning Social Navigation from Demonstrations with Conditional Neural
Processes [2.627046865670577]
This paper presents a data-driven navigation architecture that uses Conditional Neural Processes to learn global and local controllers of the mobile robot from observations.
Our results demonstrate that the proposed framework can successfully carry out navigation tasks while respecting the social norms reflected in the data.
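The entry above names Conditional Neural Processes as the learning mechanism but gives no architectural detail. Below is a minimal, generic CNP sketch rather than the paper's architecture: the layer sizes, dimensions, and the state-to-velocity interpretation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionalNeuralProcess(nn.Module):
    """Minimal CNP: encode (state, action) context pairs, average them into one
    latent representation, and decode (representation, query state) into a
    predicted action, e.g. a velocity command for the robot."""

    def __init__(self, x_dim, y_dim, r_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, 128), nn.ReLU(), nn.Linear(128, r_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(r_dim + x_dim, 128), nn.ReLU(), nn.Linear(128, y_dim)
        )

    def forward(self, context_x, context_y, target_x):
        # context_x: (N, x_dim), context_y: (N, y_dim), target_x: (M, x_dim)
        r = self.encoder(torch.cat([context_x, context_y], dim=-1)).mean(dim=0)
        r = r.expand(target_x.shape[0], -1)  # share one representation across queries
        return self.decoder(torch.cat([r, target_x], dim=-1))

# Toy usage: condition on a few demonstrated (state, velocity) pairs and
# predict velocities at new query states.
cnp = ConditionalNeuralProcess(x_dim=4, y_dim=2)
ctx_x, ctx_y = torch.randn(10, 4), torch.randn(10, 2)
query_x = torch.randn(5, 4)
pred_y = cnp(ctx_x, ctx_y, query_x)  # (5, 2) predicted commands
```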
arXiv Detail & Related papers (2022-10-07T14:37:56Z) - GNM: A General Navigation Model to Drive Any Robot [67.40225397212717]
A general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots.
We analyze the necessary design decisions for effective data sharing across robots.
We deploy the trained GNM on a range of new robots, including an underactuated quadrotor.
arXiv Detail & Related papers (2022-10-07T07:26:41Z) - Socially Compliant Navigation Dataset (SCAND): A Large-Scale Dataset of
Demonstrations for Social Navigation [92.66286342108934]
Social navigation is the capability of an autonomous agent, such as a robot, to navigate in a 'socially compliant' manner in the presence of other intelligent agents such as humans.
Our dataset contains 8.7 hours, 138 trajectories, and 25 miles of socially compliant, human-teleoperated driving demonstrations.
arXiv Detail & Related papers (2022-03-28T19:09:11Z) - ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only
Onboard Sensors [64.2809875343854]
We study how robots can autonomously learn skills that require a combination of navigation and grasping.
Our system, ReLMM, can learn continuously on a real-world platform without any environment instrumentation.
After a grasp curriculum training phase, ReLMM can learn navigation and grasping together fully automatically, in around 40 hours of real-world training.
arXiv Detail & Related papers (2021-07-28T17:59:41Z) - LaND: Learning to Navigate from Disengagements [158.6392333480079]
We present a reinforcement learning approach for learning to navigate from disengagements, or LaND.
LaND learns a neural network model that predicts which actions lead to disengagements given the current sensory observation, and then at test time plans and executes actions that avoid disengagements.
Our results demonstrate LaND can successfully learn to navigate in diverse, real world sidewalk environments, outperforming both imitation learning and reinforcement learning approaches.
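The summary above already describes the control loop: a learned model scores how likely an action sequence is to cause a human disengagement, and the planner executes low-risk actions. A hedged sketch of that loop follows, with a linear stand-in for the paper's neural network and random-shooting planning as an assumed planner.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_disengagement(obs, action_seq, weights):
    """Stand-in for the learned model: probability that executing action_seq
    from the current observation leads to a human disengagement."""
    logits = obs @ weights["obs"] + action_seq.ravel() @ weights["act"]
    return 1.0 / (1.0 + np.exp(-logits))

def plan(obs, weights, horizon=5, num_candidates=64):
    """Test-time planning: sample candidate action sequences and execute the
    first action of the sequence with the lowest predicted disengagement."""
    candidates = rng.uniform(-1.0, 1.0, size=(num_candidates, horizon, 2))
    risks = [predict_disengagement(obs, a, weights) for a in candidates]
    best = candidates[int(np.argmin(risks))]
    return best[0]  # (steering, throttle) to execute now

# Toy usage with random "learned" weights and a random observation.
weights = {"obs": rng.normal(size=8), "act": rng.normal(size=5 * 2)}
obs = rng.normal(size=8)
print("action to execute:", plan(obs, weights))
```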
arXiv Detail & Related papers (2020-10-09T17:21:42Z) - Robot Perception enables Complex Navigation Behavior via Self-Supervised
Learning [23.54696982881734]
We propose an approach to unify successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL).
Our method temporally incorporates compact motion and visual perception data, directly obtained using self-supervision from a single image sequence.
We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework.
arXiv Detail & Related papers (2020-06-16T07:45:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.