Robot Navigation with Reinforcement Learned Path Generation and
Fine-Tuned Motion Control
- URL: http://arxiv.org/abs/2210.10639v1
- Date: Wed, 19 Oct 2022 15:10:52 GMT
- Title: Robot Navigation with Reinforcement Learned Path Generation and
Fine-Tuned Motion Control
- Authors: Longyuan Zhang, Ziyue Hou, Ji Wang, Ziang Liu and Wei Li
- Abstract summary: We propose a novel reinforcement learning based path generation (RL-PG) approach for mobile robot navigation without prior exploration of an unknown environment.
We deploy our model on both simulation and physical platforms and demonstrate our model performs robot navigation effectively and safely.
- Score: 5.187605914580086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel reinforcement learning (RL) based path
generation (RL-PG) approach for mobile robot navigation without prior
exploration of an unknown environment. Multiple predictive path points are
dynamically generated by a deep Markov model, optimized with an RL approach, for
the robot to track. To ensure safety when tracking the predictive points, the
robot's motion is fine-tuned by a motion fine-tuning module. Such an approach,
using the deep Markov model with an RL algorithm for planning, focuses on the
relationship between adjacent path points. We show that our proposed approach
is more effective and achieves a higher success rate than the RL-based approach
DWA-RL and the traditional navigation approach APF. We deploy our model on both
simulation and physical platforms and demonstrate that it performs robot
navigation effectively and safely.
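The abstract's two-stage design, an RL policy that emits predictive path points for tracking plus a motion fine-tuning module that keeps tracking safe, can be illustrated with a minimal sketch. The function names (`generate_waypoints`, `fine_tune_velocity`) and the distance-based velocity scaling rule are illustrative assumptions, not the authors' implementation:

```python
def generate_waypoints(policy, state, n_points=5):
    """Roll a learned policy forward to produce n_points predictive path
    points for the robot to track. `policy` maps a 2D state to a
    displacement; here it is any callable, e.g. a trained RL model."""
    points, s = [], state
    for _ in range(n_points):
        dx, dy = policy(s)
        s = (s[0] + dx, s[1] + dy)
        points.append(s)
    return points

def fine_tune_velocity(v_cmd, obstacle_dist, safe_dist=0.5, v_max=1.0):
    """Motion fine-tuning: clip the tracking velocity to v_max and scale
    it down as the robot nears an obstacle, reaching zero at safe_dist."""
    scale = max(0.0, min(1.0, (obstacle_dist - safe_dist) / safe_dist))
    return max(-v_max, min(v_max, v_cmd)) * scale
```

In this sketch the planner and the safety layer are decoupled: the policy can propose aggressive waypoints while the fine-tuning stage alone guarantees the commanded speed vanishes near obstacles.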
Related papers
- Navigating the Human Maze: Real-Time Robot Pathfinding with Generative Imitation Learning [0.0]
We introduce goal-conditioned autoregressive models to generate crowd behaviors, capturing intricate interactions among individuals.
The model processes potential robot trajectory samples and predicts the reactions of surrounding individuals, enabling proactive robotic navigation in complex scenarios.
arXiv Detail & Related papers (2024-08-07T14:32:41Z)
- Research on Autonomous Robots Navigation based on Reinforcement Learning [13.559881645869632]
We use the Deep Q Network (DQN) and Proximal Policy Optimization (PPO) models to optimize the path planning and decision-making process.
We have verified the effectiveness and robustness of these models in various complex scenarios.
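The DQN decision process summarized above rests on the Bellman update. A tabular Q-learning sketch makes that update explicit; the cited work replaces the table with a deep network, and this toy gridworld, its parameters, and the function name are assumptions for illustration:

```python
import random

def q_learning_gridworld(size=4, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a size x size grid: start (0,0), goal at the
    opposite corner, reward 1 on reaching the goal, 0 elsewhere. DQN uses
    the same Bellman target but approximates Q with a neural network."""
    rng = random.Random(seed)
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    Q = {}
    def q(s, a): return Q.get((s, a), 0.0)
    goal = (size - 1, size - 1)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(4 * size * size):        # step limit per episode
            if rng.random() < eps:              # epsilon-greedy exploration
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: q(s, b))
            nx = (min(size - 1, max(0, s[0] + a[0])),
                  min(size - 1, max(0, s[1] + a[1])))
            r = 1.0 if nx == goal else 0.0
            target = r + gamma * max(q(nx, b) for b in actions)  # Bellman target
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            s = nx
            if s == goal:
                break
    return Q
```

After training, the greedy policy reads actions directly off the table; the deep variants (DQN, PPO) differ in how the value or policy is represented and updated, not in this underlying objective.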
arXiv Detail & Related papers (2024-07-02T00:44:06Z)
- Deep Reinforcement Learning with Enhanced PPO for Safe Mobile Robot Navigation [0.6554326244334868]
This study investigates the application of deep reinforcement learning to train a mobile robot for autonomous navigation in a complex environment.
The robot utilizes LiDAR sensor data and a deep neural network to generate control signals guiding it toward a specified target while avoiding obstacles.
arXiv Detail & Related papers (2024-05-25T15:08:36Z)
- NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods.
arXiv Detail & Related papers (2023-10-11T21:07:14Z)
- Deterministic and Stochastic Analysis of Deep Reinforcement Learning for Low Dimensional Sensing-based Navigation of Mobile Robots [0.41562334038629606]
This paper presents a comparative analysis of two Deep-RL techniques: Deep Deterministic Policy Gradients (DDPG) and Soft Actor-Critic (SAC).
We aim to contribute by showing how the neural network architecture influences the learning itself, presenting quantitative results based on the time and distance of aerial mobile robots for each approach.
arXiv Detail & Related papers (2022-09-13T22:28:26Z)
- Constrained Reinforcement Learning for Robotics via Scenario-Based Programming [64.07167316957533]
It is crucial to optimize the performance of DRL-based agents while providing guarantees about their behavior.
This paper presents a novel technique for incorporating domain-expert knowledge into a constrained DRL training loop.
Our experiments demonstrate that using our approach to leverage expert knowledge dramatically improves the safety and the performance of the agent.
arXiv Detail & Related papers (2022-06-20T07:19:38Z)
- Verifying Learning-Based Robotic Navigation Systems [61.01217374879221]
We show how modern verification engines can be used for effective model selection.
Specifically, we use verification to detect and rule out policies that may demonstrate suboptimal behavior.
Our work is the first to demonstrate the use of verification backends for recognizing suboptimal DRL policies in real-world robots.
arXiv Detail & Related papers (2022-05-26T17:56:43Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of the future state uncertainty considered in the SMPC finite-time-horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
- LBGP: Learning Based Goal Planning for Autonomous Following in Front [16.13120109400351]
This paper investigates a hybrid solution which combines deep reinforcement learning (RL) and classical trajectory planning.
An autonomous robot aims to stay ahead of a person as the person freely walks around.
Our system outperforms the state-of-the-art in following ahead and is more reliable compared to end-to-end alternatives in both the simulation and real world experiments.
arXiv Detail & Related papers (2020-11-05T22:29:30Z)
- Path Planning Followed by Kinodynamic Smoothing for Multirotor Aerial Vehicles (MAVs) [61.94975011711275]
We propose a geometrically based motion planning technique, "RRT*", for this purpose.
In the proposed technique, we modify the original RRT* by introducing an adaptive search space and a steering function.
We have tested the proposed technique in various simulated environments.
arXiv Detail & Related papers (2020-08-29T09:55:49Z)
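The modified-RRT* idea above combines nearest-neighbor extension with a steering function, choosing the cheapest parent within a radius, and rewiring neighbors through new nodes. A minimal 2D sketch follows; the obstacle-free world, parameter values, and function name are illustrative assumptions, and an adaptive search space (as in the paper) could be emulated by the caller's `sample_fn`:

```python
import math
import random

def rrt_star(start, goal, sample_fn, step=0.5, radius=1.0, iters=500, seed=0):
    """Minimal RRT* in an obstacle-free 2D plane. `sample_fn(rng)` returns
    a sampled point; varying its range over time would give an adaptive
    search space. Returns the tree path from start to the node nearest goal."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    cost = {0: 0.0}
    def dist(a, b): return math.hypot(a[0] - b[0], a[1] - b[1])
    def steer(a, b):                 # move at most `step` from a toward b
        d = dist(a, b)
        if d <= step:
            return b
        t = step / d
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    for _ in range(iters):
        q = sample_fn(rng)
        near = min(range(len(nodes)), key=lambda i: dist(nodes[i], q))
        new = steer(nodes[near], q)
        # choose the cheapest parent in the radius (the RRT* refinement;
        # `near` is always a candidate because step <= radius)
        nbrs = [i for i in range(len(nodes)) if dist(nodes[i], new) <= radius]
        best = min(nbrs, key=lambda i: cost[i] + dist(nodes[i], new))
        idx = len(nodes)
        nodes.append(new)
        parent[idx] = best
        cost[idx] = cost[best] + dist(nodes[best], new)
        # rewire neighbors through the new node when that is cheaper
        # (descendant costs are not propagated in this minimal sketch)
        for i in nbrs:
            c = cost[idx] + dist(new, nodes[i])
            if c < cost[i]:
                parent[i] = idx
                cost[i] = c
    goal_idx = min(range(len(nodes)), key=lambda i: dist(nodes[i], goal))
    path, i = [], goal_idx
    while i is not None:
        path.append(nodes[i])
        i = parent[i]
    return path[::-1]
```

The choose-parent and rewire steps are what distinguish RRT* from plain RRT and give it asymptotic optimality; a practical version would also add collision checks along each edge.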
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all summaries) and is not responsible for any consequences of its use.