Vessel-following model for inland waterways based on deep reinforcement learning
- URL: http://arxiv.org/abs/2207.03257v1
- Date: Thu, 7 Jul 2022 12:19:03 GMT
- Title: Vessel-following model for inland waterways based on deep reinforcement learning
- Authors: Fabian Hart, Ostap Okhrin, Martin Treiber
- Abstract summary: This study investigates the feasibility of RL-based vehicle-following under complex vehicle dynamics and strong environmental disturbances.
We developed an inland waterways vessel-following model based on realistic vessel dynamics.
Our model demonstrated safe and comfortable driving in all scenarios, showing excellent generalization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While deep reinforcement learning (RL) has been increasingly applied to designing car-following models in recent years, this study investigates the feasibility of RL-based vehicle-following under complex vehicle dynamics and strong environmental disturbances. As a use case, we developed an inland waterways vessel-following model based on realistic vessel dynamics that accounts for environmental influences such as varying stream velocity and river profile. We extracted natural vessel behavior from anonymized AIS data to formulate a reward function that reflects a realistic driving style alongside comfortable and safe navigation. Aiming at high generalization capability, we propose an RL training environment that uses stochastic processes to model the leading trajectory and the river dynamics. To validate the trained model, we defined different scenarios that were not seen during training, including realistic vessel-following on the Middle Rhine. The model drove safely and comfortably in all scenarios, demonstrating excellent generalization. Furthermore, traffic oscillations could be effectively damped by deploying the trained model on a sequence of following vessels.
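Two ingredients of this abstract lend themselves to a compact illustration: a stochastic process for the leader's trajectory and a reward that balances realism, comfort, and safety. The sketch below assumes an Ornstein-Uhlenbeck (OU) speed process and quadratic penalties; the paper's actual process parameters and AIS-fitted reward terms are not given here, so every constant and function name is an illustrative assumption.

```python
import numpy as np

# Illustrative only: an OU process is one plausible choice for the "stochastic
# processes" the abstract mentions for the leading trajectory; the paper's
# actual process and AIS-fitted reward are not reproduced here.

def ou_step(x, mu, theta, sigma, dt, rng):
    """One Euler-Maruyama step of dx = theta * (mu - x) * dt + sigma * dW."""
    return x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal()

def reward(gap, accel, gap_ref=300.0, gap_min=100.0):
    """Toy reward: hold a realistic following distance, keep control smooth,
    and penalize unsafe proximity hard (all weights are assumptions)."""
    r_style = -((gap - gap_ref) / gap_ref) ** 2   # realistic driving style
    r_comfort = -0.1 * accel ** 2                 # comfortable navigation
    r_safety = -10.0 if gap < gap_min else 0.0    # safe navigation
    return r_style + r_comfort + r_safety

rng = np.random.default_rng(42)
v_leader, dt = 4.0, 1.0                           # leader speed (m/s), step (s)
trajectory = []
for _ in range(3600):                             # one simulated hour of leader speeds
    v_leader = max(0.0, ou_step(v_leader, mu=4.0, theta=0.05, sigma=0.3, dt=dt, rng=rng))
    trajectory.append(v_leader)
```

Because the OU process mean-reverts, every training episode exposes the agent to a fresh but statistically similar leader trajectory, which is the property that supports generalization to unseen scenarios.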
Related papers
- Evaluating Robustness of Reinforcement Learning Algorithms for Autonomous Shipping [2.9109581496560044]
This paper examines the robustness of benchmark deep reinforcement learning (RL) algorithms, implemented for inland waterway transport (IWT) within an autonomous shipping simulator.
We show that a model-free approach can achieve an adequate policy in the simulator, successfully navigating port environments never encountered during training.
arXiv Detail & Related papers (2024-11-07T17:55:07Z)
- Model-Based Reinforcement Learning for Control of Strongly-Disturbed Unsteady Aerodynamic Flows [0.0]
We propose a model-based reinforcement learning (MBRL) approach by incorporating a novel reduced-order model as a surrogate for the full environment.
The robustness and generalizability of the model are demonstrated in two distinct flow environments.
We demonstrate that the policy learned in the reduced-order environment translates to an effective control strategy in the full CFD environment.
arXiv Detail & Related papers (2024-08-26T23:21:44Z)
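A minimal sketch of the surrogate idea in the paper above, under stated assumptions: a cheap reduced-order model replaces the expensive flow environment during policy search, and only the resulting policy is carried over to the full environment. The linear model and the random-search loop below are hypothetical placeholders, not the paper's method.

```python
import numpy as np

class ReducedOrderModel:
    """Toy linear surrogate x' = A x + B u standing in for the full flow solver."""
    def __init__(self, n=4, m=1):
        self.A = np.eye(n) * 0.95
        self.B = np.ones((n, m)) * 0.1

    def step(self, x, u):
        return self.A @ x + self.B @ u

def train_policy_in_surrogate(rom, episodes=100, horizon=50):
    """Random-search policy improvement inside the surrogate (RL placeholder)."""
    best_gain, best_ret = None, -np.inf
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        K = rng.normal(size=(1, 4))            # candidate linear feedback policy
        x, ret = np.ones(4), 0.0
        for _ in range(horizon):
            u = -K @ x
            x = rom.step(x, u)
            ret -= float(x @ x + u @ u)        # quadratic cost as negative reward
        if ret > best_ret:
            best_gain, best_ret = K, ret
    return best_gain                           # then validate in the full environment

policy = train_policy_in_surrogate(ReducedOrderModel())
```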
- Adversarial Safety-Critical Scenario Generation using Naturalistic Human Driving Priors [2.773055342671194]
We introduce a natural adversarial scenario generation solution using naturalistic human driving priors and reinforcement learning techniques.
Our findings demonstrate that the proposed model can generate realistic safety-critical test scenarios covering both naturalness and adversariality.
arXiv Detail & Related papers (2024-08-06T13:58:56Z)
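One plausible reading of the objective in the paper above, sketched under assumptions: the scenario-generating agent earns reward for shrinking the ego vehicle's safety margin while staying likely under a naturalistic driving prior. Both terms and their weights below are hypothetical.

```python
def scenario_reward(ttc, log_prior, w_adv=1.0, w_nat=0.5, ttc_floor=0.1):
    """Reward = adversariality (small time-to-collision) + naturalness (prior likelihood).
    All weights are illustrative assumptions, not the paper's published values."""
    r_adv = w_adv / max(ttc, ttc_floor)   # pressure toward safety-critical interactions
    r_nat = w_nat * log_prior             # log-likelihood under a human driving prior
    return r_adv + r_nat

# Example: a 2 s time-to-collision scored against a moderately likely maneuver.
print(scenario_reward(ttc=2.0, log_prior=-1.5))
```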
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation that leverages recent advances in the integration of game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
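For a concrete starting point on the PPO baseline named above, this is how PPO is commonly trained with Stable-Baselines3 on a Gym-style task; the paper's aquatic environment and its advanced training techniques are not reproduced, and `Pendulum-v1` is only a stand-in.

```python
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")            # stand-in for an aquatic-navigation task
model = PPO("MlpPolicy", env, n_steps=2048, batch_size=64, gamma=0.99, verbose=0)
model.learn(total_timesteps=100_000)     # curricula and other tricks omitted
model.save("ppo_baseline")
```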
- RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality [51.244807332133696]
This paper introduces RACER, a cutting-edge deep learning car-following model that predicts Adaptive Cruise Control (ACC) driving behavior.
Unlike conventional models, RACER effectively integrates Rational Driving Constraints (RDCs), crucial tenets of actual driving.
RACER excels across key metrics, such as acceleration, velocity, and spacing, registering zero violations.
arXiv Detail & Related papers (2023-12-12T06:21:30Z)
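A hedged sketch of the constraint idea in the RACER entry above: adjust a model's predicted acceleration so basic rational driving constraints can never be violated, which is one way to obtain the "zero violations" property. The bounds and projection rule below are illustrative assumptions, not RACER's published constraint set.

```python
import numpy as np

A_MIN, A_MAX = -3.0, 2.0          # assumed comfortable acceleration bounds (m/s^2)

def enforce_rdc(a_pred, v, gap, dt=0.1, gap_min=2.0):
    """Project a predicted acceleration onto a feasible set: bounded magnitude,
    non-negative speed, and braking when spacing would become unsafe."""
    a = np.clip(a_pred, A_MIN, A_MAX)
    a = max(a, -v / dt)           # speed must stay non-negative after one step
    if gap - v * dt < gap_min:    # worst case: stationary leader ahead
        a = A_MIN                 # brake at the comfort limit
    return a

print(enforce_rdc(a_pred=3.5, v=20.0, gap=15.0))   # clipped to A_MAX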
- Reinforcement Learning with Human Feedback for Realistic Traffic Simulation [53.85002640149283]
A key element of effective simulation is the incorporation of realistic traffic models that align with human knowledge.
This study identifies two main challenges: capturing the nuances of human preferences on realism and the unification of diverse traffic simulation models.
arXiv Detail & Related papers (2023-09-01T19:29:53Z)
- Avoidance Navigation Based on Offline Pre-Training Reinforcement Learning [0.0]
This paper presents a pre-training Deep Reinforcement Learning (DRL) approach for map-free avoidance navigation of mobile robots.
An efficient offline training strategy is proposed to speed up the inefficient random exploration of the early training stage.
The trained DRL model is shown to generalize across different environments.
arXiv Detail & Related papers (2023-08-03T06:19:46Z)
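A hedged sketch of the pretrain-then-fine-tune pattern from the entry above: fit a policy to logged expert transitions offline (here, a least-squares behavior-cloning placeholder), then use it to initialize the online DRL agent instead of starting from random exploration. All shapes and data below are invented for illustration.

```python
import numpy as np

def pretrain_linear_policy(states, actions, reg=1e-3):
    """Least-squares behavior cloning: action ~ W @ state (placeholder for a network)."""
    S, A = np.asarray(states), np.asarray(actions)
    W = np.linalg.solve(S.T @ S + reg * np.eye(S.shape[1]), S.T @ A)
    return W.T                                   # (action_dim, state_dim)

rng = np.random.default_rng(0)
demo_states = rng.normal(size=(500, 8))          # e.g., laser-scan features
demo_actions = demo_states @ rng.normal(size=(8, 2)) * 0.1   # expert's (v, omega)
W0 = pretrain_linear_policy(demo_states, demo_actions)
# W0 would now warm-start the DRL actor before online fine-tuning.
```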
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show that data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- Objective-aware Traffic Simulation via Inverse Reinforcement Learning [31.26257563160961]
We formulate traffic simulation as an inverse reinforcement learning problem.
We propose a parameter sharing adversarial inverse reinforcement learning model for dynamics-robust simulation learning.
Our proposed model is able to imitate a vehicle's trajectories in the real world while simultaneously recovering the reward function.
arXiv Detail & Related papers (2021-05-20T07:26:34Z)
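A minimal sketch of the adversarial IRL mechanism named above: with a sigmoid discriminator D(s, a) separating expert transitions from policy transitions, the recovered reward log D - log(1 - D) is exactly the discriminator's logit. The linear discriminator below is a toy stand-in for the paper's parameter-sharing network.

```python
import numpy as np

def recovered_reward(logit):
    """AIRL-style reward from a discriminator logit: log D - log(1 - D) = logit."""
    return logit        # the log-odds of "expert" vs "policy" is the logit itself

def discriminator_logit(state, action, w_s, w_a):
    """Toy linear discriminator over a (state, action) pair."""
    return float(state @ w_s + action @ w_a)

w_s, w_a = np.ones(4) * 0.1, np.ones(2) * 0.2    # assumed shared parameters
print(recovered_reward(discriminator_logit(np.ones(4), np.ones(2), w_s, w_a)))
```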
- Learning to drive from a world on rails [78.28647825246472]
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach.
A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory.
Our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data.
arXiv Detail & Related papers (2021-05-03T05:55:30Z)
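A toy sketch of the supervision scheme described above, under assumed kinematics: the forward model rolls out each candidate action "on rails" from a logged state, and the lowest-cost outcome becomes the label that supervises the driving policy. The dynamics and cost are invented stand-ins.

```python
import numpy as np

def forward_model(x, a, dt=0.1):
    """Toy kinematics: state = (position, speed), action = acceleration."""
    return np.array([x[0] + x[1] * dt, x[1] + a * dt])

def best_action(x, candidates=(-2.0, 0.0, 2.0), horizon=10, v_ref=10.0):
    """Score each candidate by rolling it out, return the supervision label."""
    def rollout_cost(a):
        s = x.copy()
        for _ in range(horizon):
            s = forward_model(s, a)
        return (s[1] - v_ref) ** 2               # deviation from reference speed
    return min(candidates, key=rollout_cost)

label = best_action(np.array([0.0, 6.0]))        # target action for the policy
print(label)                                     # accelerates toward v_ref
```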
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
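A minimal sketch of the risk intuition in the CARL entry above, assuming an ensemble of dynamics models trained across the diverse source environments: disagreement between their predictions serves as a risk estimate that a cautious agent penalizes before acting in the target setting. The risk measure and weight are assumptions.

```python
import numpy as np

def risk(ensemble_predictions):
    """Risk = mean per-dimension std across the ensemble's next-state predictions."""
    return float(np.std(np.asarray(ensemble_predictions), axis=0).mean())

def cautious_value(expected_return, ensemble_predictions, risk_weight=5.0):
    """Risk-penalized value for action selection in the safety-critical target."""
    return expected_return - risk_weight * risk(ensemble_predictions)

# Two candidate actions: similar return, very different model disagreement.
preds_safe = [np.array([1.0, 0.0])] * 5                            # models agree
preds_risky = [np.array([1.0, d]) for d in np.linspace(-1, 1, 5)]  # models disagree
print(cautious_value(1.0, preds_safe), cautious_value(1.1, preds_risky))
```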