How to Brake? Ethical Emergency Braking with Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2512.10698v1
- Date: Thu, 11 Dec 2025 14:40:33 GMT
- Title: How to Brake? Ethical Emergency Braking with Deep Reinforcement Learning
- Authors: Jianbo Wang, Galina Sidorenko, Johan Thunberg
- Abstract summary: We investigate how Deep Reinforcement Learning (DRL) can be leveraged to improve safety in multi-vehicle-following scenarios involving emergency braking. Specifically, we investigate how DRL with vehicle-to-vehicle communication can be used to ethically select an emergency braking profile. We provide a hybrid approach that combines DRL with a previously published method based on analytical expressions for selecting optimal constant deceleration.
- Score: 3.906196377005682
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Connected and automated vehicles (CAVs) have the potential to enhance driving safety, for example by enabling safe vehicle following and more efficient traffic scheduling. For such future deployments, safety requirements should be addressed, the primary ones being avoidance of vehicle collisions and substantial mitigation of harm when collisions are unavoidable. However, conservative worst-case-based control strategies come at the price of reduced flexibility and may compromise overall performance. In light of this, we investigate how Deep Reinforcement Learning (DRL) can be leveraged to improve safety in multi-vehicle-following scenarios involving emergency braking. Specifically, we investigate how DRL with vehicle-to-vehicle communication can be used to ethically select an emergency braking profile in scenarios where collective, three-vehicle harm reduction or collision avoidance shall be obtained rather than that of a single vehicle. As an algorithm, we provide a hybrid approach that combines DRL with a previously published method based on analytical expressions for selecting optimal constant deceleration. By combining DRL with the previous method, the proposed hybrid approach increases reliability compared to standalone DRL, while achieving superior performance in terms of overall harm reduction and collision avoidance.
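The hybrid scheme described in the abstract can be sketched as follows. The function names, the simple kinematic fallback, and the acceptance rule are illustrative assumptions for a minimal sketch, not the authors' published analytical expressions:

```python
def analytic_deceleration(v_rel, gap, a_max=8.0):
    """Hypothetical analytic fallback: the smallest constant deceleration
    (m/s^2) that removes the closing speed v_rel (m/s) within the current
    gap (m), capped at the vehicle's braking limit a_max."""
    if v_rel <= 0.0 or gap <= 0.0:
        return 0.0
    return min(v_rel ** 2 / (2.0 * gap), a_max)

def hybrid_brake(drl_decel, v_rel, gap, a_max=8.0):
    """Accept the DRL-proposed deceleration only when it is at least as
    strong as the analytic safe minimum and physically feasible; otherwise
    fall back to the analytic constant deceleration."""
    floor = analytic_deceleration(v_rel, gap, a_max)
    if floor <= drl_decel <= a_max:
        return drl_decel
    return floor
```

The analytic branch acts as a reliability backstop: whenever the learned policy proposes a deceleration that the closed-form check deems insufficient, the constant-deceleration value is used instead.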
Related papers
- Rethinking Safety in LLM Fine-tuning: An Optimization Perspective [56.31306558218838]
We show that poor optimization choices, rather than inherent trade-offs, often cause safety problems, measured as harmful responses to adversarial prompts. We propose a simple exponential moving average (EMA) momentum technique in parameter space that preserves safety performance. Our experiments on the Llama families across multiple datasets demonstrate that safety problems can largely be avoided without specialized interventions.
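The EMA-in-parameter-space idea is simple to sketch; the decay value and the dict-of-floats parameter layout below are assumptions for illustration, not details from the paper:

```python
def ema_update(ema_params, live_params, decay=0.999):
    """One EMA momentum step in parameter space: the averaged weights
    track the fine-tuned weights slowly, smoothing out updates that
    would otherwise degrade safety behavior."""
    return {name: decay * ema_params[name] + (1.0 - decay) * live_params[name]
            for name in ema_params}
```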
arXiv Detail & Related papers (2025-08-17T23:46:36Z) - Advanced Longitudinal Control and Collision Avoidance for High-Risk Edge Cases in Autonomous Driving [0.0]
We propose a novel longitudinal control and collision avoidance algorithm that integrates adaptive cruising with emergency braking. In simulated high-risk scenarios, the algorithm effectively prevents potential pile-up collisions, even in situations involving heavy-duty vehicles. In typical highway scenarios where three vehicles decelerate, the proposed DRL approach achieves a 99% success rate, far surpassing the standard Federal Highway Administration speed concepts guide.
arXiv Detail & Related papers (2025-04-26T14:17:06Z) - SECRM-2D: RL-Based Efficient and Comfortable Route-Following Autonomous Driving with Analytic Safety Guarantees [5.156059061769101]
SECRM-2D is an RL autonomous driving controller that balances optimization of efficiency and comfort and follows a fixed route.
We evaluate SECRM-2D against several learning and non-learning baselines in simulated test scenarios.
arXiv Detail & Related papers (2024-07-23T21:54:39Z) - RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z) - Deep Reinforcement Learning for Advanced Longitudinal Control and Collision Avoidance in High-Risk Driving Scenarios [0.0]
This study introduces a novel deep reinforcement learning based algorithm for longitudinal control and collision avoidance. Its implementation in simulated high-risk scenarios, which involve emergency braking in dense traffic where traditional systems typically fail, has demonstrated the algorithm's ability to prevent potential pile-up collisions.
arXiv Detail & Related papers (2024-04-29T19:58:34Z) - CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios.
CAT can effectively generate adversarial scenarios countering the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z) - A Multiplicative Value Function for Safe and Efficient Reinforcement Learning [131.96501469927733]
We propose a safe model-free RL algorithm with a novel multiplicative value function consisting of a safety critic and a reward critic.
The safety critic predicts the probability of constraint violation and discounts the reward critic that only estimates constraint-free returns.
We evaluate our method in four safety-focused environments, including classical RL benchmarks augmented with safety constraints and robot navigation tasks with images and raw Lidar scans as observations.
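The multiplicative composition can be shown with a minimal numeric sketch; the scalar form below is an assumption (in the paper both quantities are learned critics over states and actions):

```python
def multiplicative_value(p_violation, constraint_free_return):
    """The safety critic predicts the probability of a constraint
    violation; the reward critic's constraint-free return estimate is
    discounted by the probability of staying safe."""
    return (1.0 - p_violation) * constraint_free_return
```

Actions with a high predicted violation probability thus receive a small combined value even when their constraint-free return is large.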
arXiv Detail & Related papers (2023-03-07T18:29:15Z) - Safe Reinforcement Learning for an Energy-Efficient Driver Assistance System [1.8899300124593645]
Reinforcement learning (RL)-based driver assistance systems seek to improve fuel consumption via continual improvement of powertrain control actions.
In this paper, an exponential control barrier function (ECBF) is derived and utilized to filter unsafe actions proposed by an RL-based driver assistance system.
The proposed safe-RL scheme is trained and evaluated in car following scenarios where it is shown that it effectively avoids collision both during training and evaluation.
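A CBF-style action filter for car following can be sketched like this; the barrier choice h = gap − d_min, the gains, and the steady-lead-vehicle assumption are all illustrative, not the paper's ECBF derivation:

```python
def ecbf_filter(a_rl, gap, v_rel, d_min=5.0, k1=0.5, k2=1.5, a_min=-8.0):
    """Filter an RL acceleration command through a second-order CBF-type
    condition.  Barrier h = gap - d_min, with hdot = -v_rel and (assuming
    a steady lead vehicle) hddot = -a_ego.  Enforcing
    hddot + k2*hdot + k1*h >= 0 yields the bound a_ego <= k1*h - k2*v_rel."""
    h = gap - d_min
    a_bound = k1 * h - k2 * v_rel
    return max(a_min, min(a_rl, a_bound))
```

When the RL policy requests an acceleration above the barrier-implied bound, the filter overrides it with the bound, so the spacing constraint is respected both during training and evaluation.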
arXiv Detail & Related papers (2023-01-03T00:25:00Z) - Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
arXiv Detail & Related papers (2022-12-14T03:11:25Z) - Enhancing Safe Exploration Using Safety State Augmentation [71.00929878212382]
We tackle the problem of safe exploration in model-free reinforcement learning.
We derive policies for scheduling the safety budget during training.
We show that Simmer can stabilize training and improve the performance of safe RL with average constraints.
arXiv Detail & Related papers (2022-06-06T15:23:07Z) - Self-Awareness Safety of Deep Reinforcement Learning in Road Traffic Junction Driving [20.85562165500152]
In a road traffic junction scenario, the vehicle typically receives partial observations from the transportation environment.
In this study, we evaluated the safety performance of three baseline DRL models (DQN, A2C, and PPO).
Our proposed self-awareness attention-DQN can significantly improve the safety performance in intersection and roundabout scenarios.
arXiv Detail & Related papers (2022-01-20T11:21:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.