Safety Aware Autonomous Path Planning Using Model Predictive
Reinforcement Learning for Inland Waterways
- URL: http://arxiv.org/abs/2311.09878v1
- Date: Thu, 16 Nov 2023 13:12:58 GMT
- Title: Safety Aware Autonomous Path Planning Using Model Predictive
Reinforcement Learning for Inland Waterways
- Authors: Astrid Vanneste, Simon Vanneste, Olivier Vasseur, Robin Janssens,
Mattias Billast, Ali Anwar, Kevin Mets, Tom De Schepper, Siegfried Mercelis,
Peter Hellinckx
- Abstract summary: We propose a novel path planning approach based on reinforcement learning called Model Predictive Reinforcement Learning (MPRL).
MPRL calculates a series of waypoints for the vessel to follow.
We demonstrate our approach on two scenarios and compare the resulting path with path planning using a Frenet frame and path planning based on a proximal policy optimization (PPO) agent.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, interest in autonomous shipping in urban waterways has
increased significantly due to the trend of keeping cars and trucks out of city
centers. Classical approaches such as Frenet frame based planning and potential
field navigation often require tuning of many configuration parameters and
sometimes even require a different configuration depending on the situation. In
this paper, we propose a novel path planning approach based on reinforcement
learning called Model Predictive Reinforcement Learning (MPRL). MPRL calculates
a series of waypoints for the vessel to follow. The environment is represented
as an occupancy grid map, allowing us to deal with any shape of waterway and
any number and shape of obstacles. We demonstrate our approach on two scenarios
and compare the resulting path with path planning using a Frenet frame and path
planning based on a proximal policy optimization (PPO) agent. Our results show
that MPRL outperforms both baselines in both test scenarios. The PPO based
approach was not able to reach the goal in either scenario while the Frenet
frame approach failed in the scenario consisting of a corner with obstacles.
MPRL was able to safely (collision free) navigate to the goal in both of the
test scenarios.
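The abstract notes that MPRL represents the environment as an occupancy grid map, which lets it handle arbitrarily shaped waterways and obstacles. As a minimal illustrative sketch only (the paper does not specify its grid resolution, encoding, or collision check; the grid layout and helper function below are assumptions), an occupancy grid and a per-waypoint safety check might look like this:

```python
import numpy as np

# Minimal occupancy-grid sketch: True = obstacle/bank, False = navigable water.
# The grid shape, resolution, and collision check are illustrative assumptions,
# not the paper's actual implementation.
grid = np.zeros((20, 20), dtype=bool)
grid[0, :] = grid[-1, :] = True      # waterway banks
grid[:, 0] = grid[:, -1] = True
grid[8:12, 5:9] = True               # an obstacle in the channel

def waypoint_is_safe(grid, row, col):
    """A waypoint is safe if it lies inside the grid on a free cell."""
    in_bounds = 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]
    return in_bounds and not grid[row, col]

# A planner such as MPRL emits a sequence of waypoints; here we simply
# verify each one against the occupancy grid.
path = [(2, 2), (5, 3), (10, 12), (17, 17)]
print(all(waypoint_is_safe(grid, r, c) for r, c in path))  # True: every waypoint is on free water
```

Because the grid is just a boolean array, any waterway outline or obstacle footprint can be rasterized into it, which is what makes the representation shape-agnostic.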
Related papers
- WROOM: An Autonomous Driving Approach for Off-Road Navigation [17.74237088460657]
We design an end-to-end reinforcement learning (RL) system for an autonomous vehicle in off-road environments.
We warm-start the agent by imitating a rule-based controller and utilize Proximal Policy Optimization (PPO) to improve the policy.
We propose a novel simulation environment to replicate off-road driving scenarios and deploy our proposed approach on a real buggy RC car.
arXiv Detail & Related papers (2024-04-12T23:55:59Z)
- LLM-Assist: Enhancing Closed-Loop Planning with Language-Based Reasoning [65.86754998249224]
We develop a novel hybrid planner that leverages a conventional rule-based planner in conjunction with an LLM-based planner.
Our approach navigates complex scenarios which existing planners struggle with, produces well-reasoned outputs while also remaining grounded through working alongside the rule-based approach.
arXiv Detail & Related papers (2023-12-30T02:53:45Z)
- Integration of Reinforcement Learning Based Behavior Planning With Sampling Based Motion Planning for Automated Driving [0.5801044612920815]
We propose a method to employ a trained deep reinforcement learning policy for dedicated high-level behavior planning.
To the best of our knowledge, this work is the first to apply deep reinforcement learning in this manner.
arXiv Detail & Related papers (2023-04-17T13:49:55Z)
- Optimizing Trajectories for Highway Driving with Offline Reinforcement Learning [11.970409518725491]
We propose a Reinforcement Learning-based approach to autonomous driving.
We compare the performance of our agent against four other highway driving agents.
We demonstrate that our offline-trained agent, using randomly collected data, learns to drive smoothly, tracking the desired velocity as closely as possible while outperforming the other agents.
arXiv Detail & Related papers (2022-03-21T13:13:08Z)
- Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider value-based and policy-gradient Deep Reinforcement Learning (DRL) approaches.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
arXiv Detail & Related papers (2021-12-16T16:53:56Z)
- Generating Useful Accident-Prone Driving Scenarios via a Learned Traffic Prior [135.78858513845233]
STRIVE is a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior, like collisions.
To maintain scenario plausibility, the key idea is to leverage a learned model of traffic motion in the form of a graph-based conditional VAE.
A subsequent optimization is used to find a "solution" to the scenario, ensuring it is useful to improve the given planner.
arXiv Detail & Related papers (2021-12-09T18:03:27Z)
- Motion Planning for Autonomous Vehicles in the Presence of Uncertainty Using Reinforcement Learning [0.0]
Motion planning under uncertainty is one of the main challenges in developing autonomous driving vehicles.
We propose a reinforcement learning based solution to manage uncertainty by optimizing for the worst case outcome.
The proposed approach yields much better motion planning behavior than conventional RL algorithms and behaves comparably to a human driving style.
arXiv Detail & Related papers (2021-10-01T20:32:25Z)
- Divide-and-Conquer for Lane-Aware Diverse Trajectory Prediction [71.97877759413272]
Trajectory prediction is a safety-critical tool for autonomous vehicles to plan and execute actions.
Recent methods have achieved strong performances using Multi-Choice Learning objectives like winner-takes-all (WTA) or best-of-many.
Our work addresses two key challenges in trajectory prediction: learning representative outputs, and producing better predictions by imposing constraints based on driving knowledge.
arXiv Detail & Related papers (2021-04-16T17:58:56Z)
- Path Planning Followed by Kinodynamic Smoothing for Multirotor Aerial Vehicles (MAVs) [61.94975011711275]
We propose a geometrically based motion planning technique, "RRT*", for this purpose.
In the proposed technique, we modified the original RRT* by introducing an adaptive search space and a steering function.
We have tested the proposed technique in various simulated environments.
arXiv Detail & Related papers (2020-08-29T09:55:49Z)
- Reinforcement Learning for Low-Thrust Trajectory Design of Interplanetary Missions [77.34726150561087]
This paper investigates the use of reinforcement learning for the robust design of interplanetary trajectories in the presence of severe disturbances.
An open-source implementation of the state-of-the-art algorithm Proximal Policy Optimization is adopted.
The resulting Guidance and Control Network provides both a robust nominal trajectory and the associated closed-loop guidance law.
arXiv Detail & Related papers (2020-08-19T15:22:15Z)
- Integrating Deep Reinforcement Learning with Model-based Path Planners for Automated Driving [0.0]
We propose a hybrid approach that integrates a path planning pipeline into a vision-based DRL framework.
In summary, the DRL agent is trained to follow the path planner's waypoints as closely as possible.
Experimental results show that the proposed method can plan its path and navigate between randomly chosen origin-destination points.
arXiv Detail & Related papers (2020-02-02T17:10:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.