Scenario-Based Hierarchical Reinforcement Learning for Automated Driving Decision Making
- URL: http://arxiv.org/abs/2506.23023v1
- Date: Sat, 28 Jun 2025 21:55:59 GMT
- Title: Scenario-Based Hierarchical Reinforcement Learning for Automated Driving Decision Making
- Authors: M. Youssef Abdelhamid, Lennart Vater, Zlatan Ajanovic
- Abstract summary: Reinforcement Learning approaches can learn comprehensive decision policies directly from experience. Current approaches fail to achieve generalizability for more complex driving tasks and lack learning efficiency. We present Scenario-based Automated Driving Reinforcement Learning (SAD-RL), the first framework that integrates Reinforcement Learning (RL) of a hierarchical policy in a scenario-based environment.
- Score: 0.27309692684728615
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Developing decision-making algorithms for highly automated driving systems remains challenging, since these systems have to operate safely in open and complex environments. Reinforcement Learning (RL) approaches can learn comprehensive decision policies directly from experience and already show promising results in simple driving tasks. However, current approaches fail to achieve generalizability for more complex driving tasks and lack learning efficiency. Therefore, we present Scenario-based Automated Driving Reinforcement Learning (SAD-RL), the first framework that integrates Reinforcement Learning (RL) of a hierarchical policy in a scenario-based environment. A high-level policy selects maneuver templates that are evaluated and executed by a low-level control logic. The scenario-based environment makes it possible to control the agent's training experience and to explicitly introduce challenging but rare situations into the training process. Our experiments show that an agent trained with the SAD-RL framework can efficiently achieve safe behaviour in both easy and challenging situations. Our ablation studies confirm that both hierarchical RL (HRL) and scenario diversity are essential for achieving these results.
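To make the described hierarchy concrete, below is a minimal sketch of the split between the two levels: a high-level policy selects among discrete maneuver templates, and a low-level control logic turns the chosen template into continuous commands. The maneuver names, the tabular policy, and `low_level_execute` are illustrative assumptions, not the paper's actual catalog or interface.

```python
import random

# Hypothetical maneuver templates; the paper's actual catalog is not spelled out here.
MANEUVERS = ["keep_lane", "change_left", "change_right", "decelerate"]

class HighLevelPolicy:
    """Epsilon-greedy tabular policy over maneuver templates (illustrative only)."""
    def __init__(self, epsilon=0.1):
        self.q = {}            # (state, maneuver) -> learned value estimate
        self.epsilon = epsilon

    def select(self, state):
        if random.random() < self.epsilon:
            return random.choice(MANEUVERS)       # explore
        return max(MANEUVERS, key=lambda m: self.q.get((state, m), 0.0))

def low_level_execute(maneuver, ego_speed):
    """Toy control logic mapping a template to (acceleration, lane_offset)."""
    if maneuver == "decelerate":
        return (-2.0, 0.0)
    if maneuver == "change_left":
        return (0.0, +1.0)
    if maneuver == "change_right":
        return (0.0, -1.0)
    return (0.5 if ego_speed < 30.0 else 0.0, 0.0)  # keep_lane, mild speed-up

policy = HighLevelPolicy()
maneuver = policy.select(state="dense_traffic")
print(maneuver, low_level_execute(maneuver, ego_speed=25.0))
```

Delegating execution to a fixed low level shrinks the space the RL agent must explore, which is the usual argument for the learning-efficiency gains the abstract reports.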
Related papers
- Automatic Curriculum Learning for Driving Scenarios: Towards Robust and Efficient Reinforcement Learning [11.602831593017427]
This paper addresses the challenges of training end-to-end autonomous driving agents using Reinforcement Learning (RL). RL agents are typically trained on a fixed set of scenarios with nominal behavior of surrounding road users in simulation. We propose an automatic curriculum learning framework that dynamically generates driving scenarios with adaptive complexity based on the agent's evolving capabilities.
arXiv Detail & Related papers (2025-05-13T06:26:57Z)
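A minimal sketch of the adaptive-complexity idea, under the assumption that difficulty is adjusted from the agent's recent success rate; the thresholds and scenario parameters below are invented for illustration.

```python
import random

class ScenarioCurriculum:
    """Hypothetical sampler that raises scenario difficulty as the agent improves."""
    def __init__(self, levels=5, target_success=0.7):
        self.level = 0
        self.levels = levels
        self.target = target_success
        self.history = []

    def record(self, success: bool):
        self.history.append(success)
        if len(self.history) >= 20:                    # evaluate every 20 episodes
            rate = sum(self.history) / len(self.history)
            if rate > self.target and self.level < self.levels - 1:
                self.level += 1                        # agent is ready: harder scenarios
            elif rate < 0.3 and self.level > 0:
                self.level -= 1                        # struggling: back off
            self.history.clear()

    def sample(self):
        # Difficulty could control traffic density, cut-in rates, etc.
        return {"traffic_density": 0.2 + 0.15 * self.level,
                "cut_in_prob": 0.05 * self.level}

cur = ScenarioCurriculum()
for episode in range(60):
    scenario = cur.sample()
    success = random.random() < 0.8   # stand-in for an actual rollout
    cur.record(success)
print(cur.level, cur.sample())
```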
- TeLL-Drive: Enhancing Autonomous Driving with Teacher LLM-Guided Deep Reinforcement Learning [61.33599727106222]
TeLL-Drive is a hybrid framework that integrates a teacher LLM to guide an attention-based student DRL policy. A self-attention mechanism fuses the teacher's strategies with the DRL agent's exploration, accelerating policy convergence and boosting robustness.
arXiv Detail & Related papers (2025-02-03T14:22:03Z)
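The fusion step can be pictured as standard scaled dot-product attention in which the student's state encoding queries the teacher's strategy embeddings. The sketch below is a generic single-head version in NumPy; shapes, token counts, and the final concatenation are assumptions, not TeLL-Drive's published architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_teacher_student(teacher_tokens, student_feat):
    """Single-head scaled dot-product attention: the student feature queries
    the teacher's strategy tokens (all shapes and values illustrative)."""
    d = student_feat.shape[-1]
    scores = student_feat @ teacher_tokens.T / np.sqrt(d)    # (1, n_tokens)
    weights = softmax(scores)
    context = weights @ teacher_tokens                       # (1, d)
    return np.concatenate([student_feat, context], axis=-1)  # fused representation

teacher_tokens = np.random.randn(4, 16)   # e.g. embeddings of 4 LLM-suggested strategies
student_feat = np.random.randn(1, 16)     # student DRL policy's state encoding
print(fuse_teacher_student(teacher_tokens, student_feat).shape)  # (1, 32)
```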
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation that builds on recent advances in integrating game engines with Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely used algorithms, and propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
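For reference, the heart of PPO that this paper builds on is the clipped surrogate objective; a minimal NumPy version follows. The batch values are toy numbers, and the additional training ingredients the paper proposes are not reproduced here.

```python
import numpy as np

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """PPO's clipped surrogate objective (to be maximized); returns its
    negation, as commonly minimized by an optimizer."""
    ratio = np.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

# Toy batch; real training adds advantage normalization, entropy bonuses, etc.
lp_new = np.array([-1.1, -0.4, -2.0])
lp_old = np.array([-1.0, -0.5, -1.8])
adv = np.array([0.5, -0.2, 1.0])
print(ppo_clip_loss(lp_new, lp_old, adv))
```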
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
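One way to picture an adaptive action-space curriculum is a speed limit that widens as epistemic risk falls; the sketch below uses ensemble disagreement as the risk proxy. The schedule and constants are assumptions, much simpler than RACER's actual mechanism.

```python
import numpy as np

class ActionSpaceCurriculum:
    """Hypothetical schedule: widen the allowed speed range as estimated
    epistemic risk falls."""
    def __init__(self, v_safe=2.0, v_max=10.0):
        self.v_safe, self.v_max = v_safe, v_max
        self.risk = 1.0   # 1.0 = fully uncertain, 0.0 = confident

    def update_risk(self, ensemble_q_values):
        # Disagreement across an ensemble of Q-estimates as epistemic risk.
        self.risk = float(np.clip(np.std(ensemble_q_values), 0.0, 1.0))

    def speed_limit(self):
        return self.v_safe + (1.0 - self.risk) * (self.v_max - self.v_safe)

cur = ActionSpaceCurriculum()
cur.update_risk(np.array([1.0, 1.1, 0.95, 1.05]))  # low disagreement
print(round(cur.speed_limit(), 2))                  # close to v_max
```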
- End-to-end Lidar-Driven Reinforcement Learning for Autonomous Racing [0.0]
Reinforcement Learning (RL) has emerged as a transformative approach in the domains of automation and robotics.
This study develops and trains an RL agent that navigates a racing environment using only raw lidar and velocity data fed to a feedforward network.
The agent's performance is then experimentally evaluated in a real-world racing scenario.
arXiv Detail & Related papers (2023-09-01T07:03:05Z)
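A feedforward lidar policy of this kind is structurally tiny; the sketch below maps raw ranges plus speed to bounded steering and throttle with an untrained two-layer network. Beam count, layer sizes, and action semantics are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: e.g. a 1080-beam lidar scan plus scalar speed.
N_BEAMS, HIDDEN, N_ACTIONS = 1080, 64, 2   # actions: (steering, throttle)

W1 = rng.normal(0, 0.01, (N_BEAMS + 1, HIDDEN))
W2 = rng.normal(0, 0.01, (HIDDEN, N_ACTIONS))

def policy(lidar_scan, speed):
    """Feedforward mapping from raw ranges + velocity to continuous actions."""
    x = np.concatenate([lidar_scan, [speed]])
    h = np.tanh(x @ W1)
    return np.tanh(h @ W2)          # bounded steering/throttle in [-1, 1]

scan = rng.uniform(0.5, 30.0, N_BEAMS)  # fake ranges in meters
print(policy(scan, speed=3.2))
```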
- Towards Safe Autonomous Driving Policies using a Neuro-Symbolic Deep Reinforcement Learning Approach [6.961253535504979]
This paper introduces a novel neuro-symbolic model-free DRL approach called DRL with Symbolic Logics (DRLSL).
It combines the strengths of DRL (learning from experience) and symbolic first-order logic (knowledge-driven reasoning) to enable safe learning during real-time interaction of autonomous driving with real environments.
We implemented the DRLSL framework for autonomous driving using the highD dataset and demonstrated that our method successfully avoids unsafe actions during both the training and testing phases.
arXiv Detail & Related papers (2023-07-03T19:43:21Z)
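The neuro-symbolic safety idea can be illustrated as a rule-based filter over the DRL agent's discrete actions: symbolic knowledge removes unsafe candidates before the learned policy chooses. The rules below are invented stand-ins for DRLSL's first-order logic layer.

```python
# Hypothetical symbolic safety rules filtering a DRL agent's action choices.
SAFE_GAP_S = 2.0   # minimum time headway, seconds

def symbolically_safe(action, time_headway, left_lane_free):
    if action == "accelerate" and time_headway < SAFE_GAP_S:
        return False                 # would violate the headway rule
    if action == "change_left" and not left_lane_free:
        return False                 # target lane occupied
    return True

def safe_action_set(actions, **facts):
    """Keep only actions consistent with the symbolic knowledge base."""
    return [a for a in actions if symbolically_safe(a, **facts)]

candidates = ["accelerate", "keep_speed", "change_left", "brake"]
print(safe_action_set(candidates, time_headway=1.2, left_lane_free=False))
# -> ['keep_speed', 'brake']; the DRL policy then chooses among these.
```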
- Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
arXiv Detail & Related papers (2022-12-14T03:11:25Z)
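A toy version of the dual-agent decoupling: a baseline agent proposes a reward-driven action, and a safe agent corrects it until a cost estimate is acceptable. Both stand-in functions below are assumptions; the paper learns these components.

```python
import numpy as np

def baseline_action(state):
    """Stand-in for the reward-maximizing baseline agent."""
    return np.array([1.0, 0.3])           # e.g. aggressive torque command

def safety_critic(state, action):
    """Stand-in for a learned cost estimate; > threshold means risky."""
    return float(np.linalg.norm(action))  # toy cost

def safe_agent_correct(state, action, threshold=1.0, shrink=0.8, max_iter=10):
    """Illustrative correction loop: the safe agent shrinks the baseline's
    action until the estimated cost is acceptable."""
    for _ in range(max_iter):
        if safety_critic(state, action) <= threshold:
            break
        action = shrink * action
    return action

s = None
a = baseline_action(s)
print(safe_agent_correct(s, a))
```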
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
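The projection stage of such a safety layer can be sketched as gradient steps on an estimated cost with respect to the action until the constraint is met. The quadratic toy cost below stands in for USL's learned cost critic.

```python
import numpy as np

def cost_fn(action):
    """Toy state-wise cost; the real method uses a learned cost critic."""
    return float(np.sum(action ** 2)) - 0.5

def cost_grad(action):
    return 2.0 * action   # analytic gradient of the toy cost

def safety_projection(action, eta=0.1, max_iter=50):
    """Iteratively nudge the action until the estimated cost is non-positive,
    mimicking the post-hoc projection stage the summary describes."""
    a = action.copy()
    for _ in range(max_iter):
        if cost_fn(a) <= 0.0:
            break
        a -= eta * cost_grad(a)
    return a

raw = np.array([0.9, 0.8])
proj = safety_projection(raw)
print(proj, cost_fn(proj))
```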
- High-level Decisions from a Safe Maneuver Catalog with Reinforcement Learning for Safe and Cooperative Automated Merging [5.732271870257913]
We propose an efficient RL-based decision-making pipeline for safe and cooperative automated driving in merging scenarios.
The proposed RL agent can efficiently identify cooperative drivers from their vehicle state history and generate interactive maneuvers.
arXiv Detail & Related papers (2021-07-15T15:49:53Z)
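A minimal illustration of identifying cooperative drivers from state history: fit a trend to the gap between ego and the rear vehicle and merge only if the gap is opening. The heuristic and thresholds are invented; the paper learns this classification.

```python
import numpy as np

def is_cooperative(gap_history, opening_threshold=0.3):
    """Hypothetical heuristic: a driver whose gap to ego keeps growing is
    treated as yielding/cooperative."""
    gaps = np.asarray(gap_history, dtype=float)
    trend = np.polyfit(np.arange(len(gaps)), gaps, deg=1)[0]  # gap growth per step
    return trend > opening_threshold

def choose_merge_maneuver(gap_history):
    # Pick from a catalog of pre-verified safe maneuvers.
    return "merge_now" if is_cooperative(gap_history) else "wait_and_observe"

print(choose_merge_maneuver([10.0, 10.6, 11.3, 12.1]))  # opening gap -> merge_now
print(choose_merge_maneuver([10.0, 9.8, 9.7, 9.5]))     # closing gap -> wait
```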
- Hierarchical Program-Triggered Reinforcement Learning Agents For Automated Driving [5.404179497338455]
Recent advances in Reinforcement Learning (RL) combined with Deep Learning (DL) have demonstrated impressive performance in complex tasks, including autonomous driving.
We propose HPRL - Hierarchical Program-triggered Reinforcement Learning, which uses a hierarchy consisting of a structured program along with multiple RL agents, each trained to perform a relatively simple task.
The focus of verification shifts to the master program under simple guarantees from the RL agents, leading to a significantly more interpretable and verifiable implementation compared to a single complex RL agent.
arXiv Detail & Related papers (2021-03-25T14:19:54Z)
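The HPRL idea of a verifiable top level can be sketched as plain control flow over specialized sub-policies: verification then targets the program, not a monolithic network. The sub-policies below are trivial stand-ins for trained RL agents.

```python
# Illustrative master program dispatching specialized RL sub-policies.

def lane_keep_policy(obs):    return "steer_straight"
def intersection_policy(obs): return "yield_then_cross"
def overtake_policy(obs):     return "pass_on_left"

def master_program(obs):
    """Structured, verifiable top level: plain control flow selects which
    trained agent acts, so verification targets this program."""
    if obs["at_intersection"]:
        return intersection_policy(obs)
    if obs["slow_vehicle_ahead"] and obs["left_lane_free"]:
        return overtake_policy(obs)
    return lane_keep_policy(obs)

obs = {"at_intersection": False, "slow_vehicle_ahead": True, "left_lane_free": True}
print(master_program(obs))  # -> pass_on_left
```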
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
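CARL's risk intuition, that experience across diverse source environments lets an agent flag unfamiliar states, can be caricatured with an ensemble of dynamics models whose disagreement serves as a risk signal. Everything below is an invented stand-in, not the paper's estimator.

```python
import numpy as np

def ensemble_risk(models, state):
    """Models trained in diverse source environments disagree on
    unfamiliar states; treat disagreement as a risk proxy."""
    preds = np.stack([m(state) for m in models])
    return float(preds.std(axis=0).mean())

# Stand-in "models": linear dynamics with slightly different parameters.
rng = np.random.default_rng(1)
models = [lambda s, A=rng.normal(1.0, 0.05, 3): A * s for _ in range(5)]

familiar, novel = np.array([0.1, 0.1, 0.1]), np.array([5.0, -4.0, 6.0])
print(ensemble_risk(models, familiar) < ensemble_risk(models, novel))  # True
```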