EnduRL: Enhancing Safety, Stability, and Efficiency of Mixed Traffic Under Real-World Perturbations Via Reinforcement Learning
- URL: http://arxiv.org/abs/2311.12261v2
- Date: Sun, 24 Mar 2024 14:18:36 GMT
- Title: EnduRL: Enhancing Safety, Stability, and Efficiency of Mixed Traffic Under Real-World Perturbations Via Reinforcement Learning
- Authors: Bibek Poudel, Weizi Li, Kevin Heaslip
- Abstract summary: We analyze real-world driving trajectories and extract a wide range of acceleration profiles.
We then incorporate these profiles into simulations to train RVs to mitigate congestion.
Our RVs demonstrate significant improvements: safety by up to 66%, efficiency by up to 54%, and stability by up to 97%.
- Score: 1.7273380623090846
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human-driven vehicles (HVs) amplify naturally occurring perturbations in traffic, leading to congestion--a major contributor to increased fuel consumption, higher collision risks, and reduced road capacity utilization. While previous research demonstrates that Robot Vehicles (RVs) can be leveraged to mitigate these issues, most such studies rely on simulations with simplistic models of human car-following behaviors. In this work, we analyze real-world driving trajectories and extract a wide range of acceleration profiles. We then incorporate these profiles into simulations to train RVs to mitigate congestion. We evaluate the safety, efficiency, and stability of mixed traffic via comprehensive experiments conducted in two mixed traffic environments (Ring and Bottleneck) at various traffic densities, configurations, and RV penetration rates. The results show that under real-world perturbations, prior RV controllers experience performance degradation on all three objectives (sometimes performing even worse than 100% HV traffic). To address this, we introduce a reinforcement learning-based RV that employs a congestion stage classifier to optimize the safety, efficiency, and stability of mixed traffic. Our RVs demonstrate significant improvements: safety by up to 66%, efficiency by up to 54%, and stability by up to 97%.
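To make the described architecture more concrete, below is a minimal, hypothetical sketch of how a congestion stage classifier could augment an RV's observation before it is passed to a learned policy. The class names, stage taxonomy, and thresholds here are illustrative assumptions for this summary, not the authors' implementation.

```python
import numpy as np

# Hypothetical congestion stages; the paper's actual taxonomy may differ.
FREE_FLOW, FORMING, CONGESTED = 0, 1, 2

class CongestionStageClassifier:
    """Labels the local traffic state from a short speed history
    (illustrative thresholds, not taken from the paper)."""

    def __init__(self, window=30, speed_low=2.0, speed_high=6.0):
        self.window = window          # number of recent samples to use
        self.speed_low = speed_low    # m/s; at or below -> congested
        self.speed_high = speed_high  # m/s; at or above -> free flow

    def classify(self, speed_history):
        recent = np.asarray(speed_history[-self.window:])
        mean_speed = recent.mean()
        if mean_speed >= self.speed_high:
            return FREE_FLOW
        if mean_speed <= self.speed_low:
            return CONGESTED
        return FORMING

def build_observation(ego_speed, lead_speed, headway, stage, num_stages=3):
    """RV observation: ego kinematics plus a one-hot congestion stage."""
    stage_onehot = np.eye(num_stages)[stage]
    return np.concatenate(([ego_speed, lead_speed, headway], stage_onehot))

# Example: an RL policy (trained elsewhere, e.g. with PPO) would consume
# this observation and output a longitudinal acceleration for the RV.
stage = CongestionStageClassifier().classify([5.1, 4.8, 4.5])
obs = build_observation(ego_speed=7.5, lead_speed=5.0, headway=12.0, stage=stage)
print(obs.shape)  # (6,)
```

In training, the stage label would let the policy condition its acceleration commands on whether congestion is forming, which is the role the abstract attributes to the classifier.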
Related papers
- CAV-AHDV-CAV: Mitigating Traffic Oscillations for CAVs through a Novel Car-Following Structure and Reinforcement Learning [8.63981338420553]
Connected and Automated Vehicles (CAVs) offer a promising solution to the challenges of mixed traffic with both CAVs and Human-Driven Vehicles (HDVs).
While HDVs rely on limited information, CAVs can leverage data from other CAVs for better decision-making.
We propose a novel "CAV-AHDV-CAV" car-following framework that treats the sequence of HDVs between two CAVs as a single entity.
arXiv Detail & Related papers (2024-06-23T15:38:29Z) - Queue-based Eco-Driving at Roundabouts with Reinforcement Learning [0.0]
We address eco-driving at roundabouts in mixed traffic to enhance traffic flow and efficiency.
We develop two approaches: a rule-based and a Reinforcement Learning-based eco-driving system.
Results show that both approaches outperform the baseline.
arXiv Detail & Related papers (2024-05-01T16:48:28Z) - Reinforcement Learning with Latent State Inference for Autonomous On-ramp Merging under Observation Delay [6.0111084468944]
We introduce the Lane-keeping, Lane-changing with Latent-state Inference and Safety Controller (L3IS) agent.
L3IS is designed to perform the on-ramp merging task safely without comprehensive knowledge about surrounding vehicles' intents or driving styles.
We present an augmentation of this agent called AL3IS that accounts for observation delays, allowing the agent to make more robust decisions in real-world environments.
arXiv Detail & Related papers (2024-03-18T15:02:46Z) - Autonomous and Human-Driven Vehicles Interacting in a Roundabout: A Quantitative and Qualitative Evaluation [34.67306374722473]
We learn a policy to minimize traffic jams and pollution in a roundabout in Milan, Italy.
We qualitatively evaluate the learned policy using a cutting-edge cockpit to assess its performance in near-real-world conditions.
Our findings show that human-driven vehicles benefit from optimizing the AVs' dynamics.
arXiv Detail & Related papers (2023-09-15T09:02:16Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - iPLAN: Intent-Aware Planning in Heterogeneous Traffic via Distributed Multi-Agent Reinforcement Learning [57.24340061741223]
We introduce a distributed multi-agent reinforcement learning (MARL) algorithm that can predict trajectories and intents in dense and heterogeneous traffic scenarios.
Our approach for intent-aware planning, iPLAN, allows agents to infer nearby drivers' intents solely from their local observations.
arXiv Detail & Related papers (2023-06-09T20:12:02Z) - Studying the Impact of Semi-Cooperative Drivers on Overall Highway Flow [76.38515853201116]
Semi-cooperative behaviors are intrinsic properties of human drivers and should be considered for autonomous driving.
New autonomous planners can consider the social value orientation (SVO) of human drivers to generate socially-compliant trajectories.
We present a study of implicit semi-cooperative driving in which agents deploy a game-theoretic version of iterative best response.
arXiv Detail & Related papers (2023-04-23T16:01:36Z) - Learning to Control and Coordinate Mixed Traffic Through Robot Vehicles at Complex and Unsignalized Intersections [33.0086333735748]
We propose a multi-agent reinforcement learning approach for the control and coordination of mixed traffic by RVs at real-world, complex intersections.
Our method can prevent congestion formation with merely 5% RVs under a real-world traffic demand of 700 vehicles per hour.
Our method is robust against blackout events, sudden drops in RV penetration, and V2V communication errors.
arXiv Detail & Related papers (2023-01-12T21:09:58Z) - Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high-performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy efficiency of networks with varying traffic conditions by 15%, using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)