Safe Reinforcement Learning for an Energy-Efficient Driver Assistance System
- URL: http://arxiv.org/abs/2301.00904v1
- Date: Tue, 3 Jan 2023 00:25:00 GMT
- Title: Safe Reinforcement Learning for an Energy-Efficient Driver Assistance System
- Authors: Habtamu Hailemichael, Beshah Ayalew, Lindsey Kerbel, Andrej Ivanco,
Keith Loiselle
- Abstract summary: Reinforcement learning (RL)-based driver assistance systems seek to improve fuel consumption via continual improvement of powertrain control actions.
In this paper, an exponential control barrier function (ECBF) is derived and utilized to filter unsafe actions proposed by an RL-based driver assistance system.
The proposed safe-RL scheme is trained and evaluated in car following scenarios where it is shown that it effectively avoids collision both during training and evaluation.
- Score: 1.8899300124593645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning (RL)-based driver assistance systems seek to improve
fuel consumption via continual improvement of powertrain control actions
considering experiential data from the field. However, the need to explore
diverse experiences in order to learn optimal policies often limits the
application of RL techniques in safety-critical systems like vehicle control.
In this paper, an exponential control barrier function (ECBF) is derived and
utilized to filter unsafe actions proposed by an RL-based driver assistance
system. The RL agent freely explores and optimizes the performance objectives
while unsafe actions are projected to the closest actions in the safe domain.
The reward is structured so that driver's acceleration requests are met in a
manner that boosts fuel economy and doesn't compromise comfort. The optimal
gear and traction torque control actions that maximize the cumulative reward
are computed via the Maximum a Posteriori Policy Optimization (MPO) algorithm
configured for a hybrid action space. The proposed safe-RL scheme is trained
and evaluated in car following scenarios where it is shown that it effectively
avoids collision both during training and evaluation while delivering on the
expected fuel economy improvements for the driver assistance system.
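The abstract's safety filter can be sketched in code. The following is a minimal, illustrative example assuming a car-following barrier h = gap - d_min of relative degree two, an ECBF condition h'' + k1*h' + k2*h >= 0, and example gains k1, k2, d_min that are not from the paper; because the action here is one-dimensional, projection to the closest safe action reduces to a simple min, whereas the paper's general projection would be a quadratic program.

```python
def ecbf_max_accel(gap, v_ego, v_lead, a_lead=0.0, d_min=5.0, k1=1.0, k2=0.5):
    """Upper bound on ego acceleration implied by the ECBF condition
    h'' + k1*h' + k2*h >= 0, with barrier h = gap - d_min,
    h' = v_lead - v_ego, and h'' = a_lead - a_ego."""
    return a_lead + k1 * (v_lead - v_ego) + k2 * (gap - d_min)

def safe_action(a_rl, gap, v_ego, v_lead, a_lead=0.0):
    """Project the RL-proposed acceleration onto the safe set.
    The safe set is a 1-D interval here, so the closest safe
    action is simply the clipped value."""
    return min(a_rl, ecbf_max_accel(gap, v_ego, v_lead, a_lead))
```

For example, when the ego vehicle is closing fast on a small gap (gap = 6 m, v_ego = 20 m/s, v_lead = 15 m/s), the bound is negative, so an aggressive RL request of +2 m/s^2 is overridden with a braking command; at a large gap with matched speeds, the RL action passes through unchanged.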
Related papers
- CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training them on safety-critical scenarios.
CAT can effectively generate adversarial scenarios that counter the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z)
- Risk-Aware Reward Shaping of Reinforcement Learning Agents for Autonomous Driving [6.613838702441967]
This paper investigates how risk-aware reward shaping can improve the training and test performance of RL agents in autonomous driving.
We propose additional reshaped reward terms that encourage exploration and penalize risky driving behaviors.
arXiv Detail & Related papers (2023-06-05T20:10:36Z)
- Driver Assistance Eco-driving and Transmission Control with Deep Reinforcement Learning [2.064612766965483]
In this paper, a model-free deep reinforcement learning (RL) control agent is proposed for active Eco-driving assistance.
It trades off fuel consumption against other driver-accommodation objectives, and learns optimal traction torque and transmission shifting policies from experience.
It shows superior performance in minimizing fuel consumption compared to a baseline controller that has full knowledge of fuel-efficiency tables.
arXiv Detail & Related papers (2022-12-15T02:52:07Z)
- Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
arXiv Detail & Related papers (2022-12-14T03:11:25Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Self-Awareness Safety of Deep Reinforcement Learning in Road Traffic Junction Driving [20.85562165500152]
In a road traffic junction scenario, the vehicle typically receives partial observations from the transportation environment.
In this study, we evaluated the safety performance of three baseline DRL models (DQN, A2C, and PPO).
Our proposed self-awareness attention-DQN can significantly improve the safety performance in intersection and roundabout scenarios.
arXiv Detail & Related papers (2022-01-20T11:21:33Z)
- Driving-Policy Adaptive Safeguard for Autonomous Vehicles Using Reinforcement Learning [19.71676985220504]
This paper proposes a driving-policy adaptive safeguard (DPAS) design, including a collision avoidance strategy and an activation function.
The driving-policy adaptive activation function dynamically assesses the risk of the current driving policy and kicks in when an urgent threat is detected.
The results are calibrated by naturalistic driving data and show that the proposed safeguard reduces the collision rate significantly without introducing more interventions.
arXiv Detail & Related papers (2020-12-02T08:01:53Z)
- Decision-making for Autonomous Vehicles on Highway: Deep Reinforcement Learning with Continuous Action Horizon [14.059728921828938]
This paper utilizes the deep reinforcement learning (DRL) method to address the continuous-horizon decision-making problem on the highway.
The running objective of the ego automated vehicle is to execute an efficient and smooth policy without collision.
The PPO-DRL-based decision-making strategy is evaluated from multiple perspectives, including optimality, learning efficiency, and adaptability.
arXiv Detail & Related papers (2020-08-26T22:49:27Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments before adapting to the safety-critical "target" environment.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
- Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum resulting in an optimal, yet physically feasible, robotic control behavior without the need for precise reward function tuning.
arXiv Detail & Related papers (2020-02-22T10:15:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.