Avoidance Navigation Based on Offline Pre-Training Reinforcement
Learning
- URL: http://arxiv.org/abs/2308.01551v1
- Date: Thu, 3 Aug 2023 06:19:46 GMT
- Title: Avoidance Navigation Based on Offline Pre-Training Reinforcement
Learning
- Authors: Yang Wenkai, Ji Ruihang, Zhang Yuxiang, Lei Hao, and Zhao Zijie
- Abstract summary: This paper presents a Pre-Training Deep Reinforcement Learning (DRL) method for mapless avoidance navigation for mobile robots.
An efficient offline training strategy is proposed to speed up inefficient random exploration in the early stage.
It was demonstrated that our DRL model has universal generalization capacity across different environments.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a Pre-Training Deep Reinforcement Learning (DRL)
method for mapless avoidance navigation for mobile robots, which maps raw
sensor data to control variables and navigates in an unknown environment. An
efficient offline training strategy is proposed to speed up inefficient random
exploration in the early stage, and we also collect a universal dataset
including expert experience for offline training, which is of some significance
for other navigation training work. The pre-training and prioritized expert
experience are proposed to reduce training time by 80% and have been verified
to double the reward of DRL. The advanced Gazebo simulator, with realistic
physical modelling and dynamic equations, narrows the sim-to-real gap. We train
our model in a corridor environment and evaluate it in different environments,
observing the same performance. Compared to traditional navigation methods, we
confirm that the trained model can be directly applied to different scenarios
and can navigate without collisions. It was demonstrated that our DRL model has
universal generalization capacity across different environments.
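The abstract's core idea of seeding training with prioritized expert experience can be illustrated with a replay buffer that stores expert demonstrations at an elevated, fixed priority so they are sampled more often early in training. This is a minimal sketch under assumed details (the class name, priority values, and proportional sampling scheme are illustrative, not the authors' implementation, which may use a sum-tree as in standard prioritized experience replay):

```python
import random

class PrioritizedExpertBuffer:
    """Replay buffer seeded with expert transitions that are sampled
    with higher probability than ordinary agent transitions."""

    def __init__(self, capacity=10000, expert_priority=5.0):
        self.capacity = capacity
        self.expert_priority = expert_priority
        self.buffer = []  # list of (transition, priority) pairs

    def add_expert(self, transition):
        # Expert demonstrations get an elevated, fixed priority.
        self.buffer.append((transition, self.expert_priority))

    def add(self, transition, priority=1.0):
        # Ordinary agent transitions use the base priority.
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)  # drop the oldest entry when full
        self.buffer.append((transition, priority))

    def sample(self, batch_size):
        # Priority-proportional sampling (simplified; a production
        # implementation would use a sum-tree for efficiency).
        transitions, priorities = zip(*self.buffer)
        return random.choices(transitions, weights=priorities, k=batch_size)

# Offline pre-training phase: fill the buffer with expert data first,
# then mix in online transitions as the agent explores.
buf = PrioritizedExpertBuffer(capacity=1000)
buf.add_expert(("scan_t0", "turn_left", 1.0, "scan_t1"))
buf.add(("scan_t1", "go_straight", 0.1, "scan_t2"))
batch = buf.sample(4)
```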
Related papers
- Traffic expertise meets residual RL: Knowledge-informed model-based residual reinforcement learning for CAV trajectory control [1.5361702135159845]
This paper introduces a knowledge-informed model-based residual reinforcement learning framework.
It integrates traffic expert knowledge into a virtual environment model, employing the Intelligent Driver Model (IDM) for basic dynamics and neural networks for residual dynamics.
We propose a novel strategy that combines traditional control methods with residual RL, facilitating efficient learning and policy optimization without the need to learn from scratch.
arXiv Detail & Related papers (2024-08-30T16:16:57Z)
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Offline Reinforcement Learning for Visual Navigation [66.88830049694457]
ReViND is the first offline RL system for robotic navigation that can leverage previously collected data to optimize user-specified reward functions in the real-world.
We show that ReViND can navigate to distant goals using only offline training from this dataset, and exhibit behaviors that qualitatively differ based on the user-specified reward function.
arXiv Detail & Related papers (2022-12-16T02:23:50Z)
- Vessel-following model for inland waterways based on deep reinforcement learning [0.0]
This study investigates the feasibility of RL-based vehicle-following under complex vehicle dynamics and strong environmental disturbances.
We developed an inland waterways vessel-following model based on realistic vessel dynamics.
Our model demonstrated safe and comfortable driving in all scenarios, proving excellent generalization abilities.
arXiv Detail & Related papers (2022-07-07T12:19:03Z)
- Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration [83.96729205383501]
We introduce prompt-based learning to achieve fast adaptation for language embeddings.
Our model can adapt to diverse vision-language navigation tasks, including VLN and REVERIE.
arXiv Detail & Related papers (2022-03-08T11:01:24Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the expected agent's performance by selecting promising trajectories solving prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
- Human-Aware Robot Navigation via Reinforcement Learning with Hindsight Experience Replay and Curriculum Learning [28.045441768064215]
Reinforcement learning approaches have shown superior ability in solving sequential decision making problems.
In this work, we consider the task of training an RL agent without employing the demonstration data.
We propose to incorporate the hindsight experience replay (HER) and curriculum learning (CL) techniques with RL to efficiently learn the optimal navigation policy in the dense crowd.
arXiv Detail & Related papers (2021-10-09T13:18:11Z)
- An A* Curriculum Approach to Reinforcement Learning for RGBD Indoor Robot Navigation [6.660458629649825]
Recently released photo-realistic simulators such as Habitat allow for the training of networks that output control actions directly from perception.
Our paper tries to overcome this problem by separating the training of the perception and control neural nets and increasing the path complexity gradually.
arXiv Detail & Related papers (2021-01-05T20:35:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.