Unified Automatic Control of Vehicular Systems with Reinforcement Learning
- URL: http://arxiv.org/abs/2208.00268v1
- Date: Sat, 30 Jul 2022 16:23:45 GMT
- Title: Unified Automatic Control of Vehicular Systems with Reinforcement Learning
- Authors: Zhongxia Yan, Abdul Rahman Kreidieh, Eugene Vinitsky, Alexandre M. Bayen, Cathy Wu
- Abstract summary: This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high-performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
- Score: 64.63619662693068
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emerging vehicular systems with increasing proportions of automated
components present opportunities for optimal control to mitigate congestion and
increase efficiency. There has been a recent interest in applying deep
reinforcement learning (DRL) to these nonlinear dynamical systems for the
automatic design of effective control strategies. Despite conceptual advantages
of DRL being model-free, studies typically nonetheless rely on training setups
that are painstakingly specialized to specific vehicular systems. This is a key
challenge to efficient analysis of diverse vehicular and mobility systems. To
this end, this article contributes a streamlined methodology for vehicular
microsimulation and discovers high-performance control strategies with minimal
manual design. A variable-agent, multi-task approach is presented for
optimization of vehicular Partially Observed Markov Decision Processes. The
methodology is experimentally validated on mixed autonomy traffic systems,
where fractions of vehicles are automated; empirical improvement, typically
15-60% over a human driving baseline, is observed in all configurations of six
diverse open or closed traffic systems. The study reveals numerous emergent
behaviors resembling wave mitigation, traffic signaling, and ramp metering.
Finally, the emergent behaviors are analyzed to produce interpretable control
strategies, which are validated against the learned control strategies.
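The abstract stops short of implementation detail; as a purely illustrative sketch of what a variable-agent, multi-task training setup might look like (one shared policy controlling however many automated vehicles each sampled scenario contains), consider the toy below. All environment dynamics, names, and numbers are assumptions for illustration, not the paper's actual code or benchmarks.

```python
# Hypothetical sketch of a variable-agent, multi-task setup for mixed-autonomy
# traffic: one shared policy controls however many automated vehicles a sampled
# scenario contains. All dynamics and names here are illustrative toys, not the
# paper's actual environments or training code.
import random

class ToyRingEnv:
    """A crude single-lane ring road: each vehicle holds a speed; automated
    vehicles pick accelerations, humans follow a fixed car-following rule."""
    def __init__(self, n_vehicles, automated_fraction):
        self.n = n_vehicles
        self.auto_ids = random.sample(range(n_vehicles),
                                      max(1, int(automated_fraction * n_vehicles)))
        self.speeds = [random.uniform(0.0, 10.0) for _ in range(n_vehicles)]
        self.t = 0

    def _observe(self, i):
        leader = (i + 1) % self.n
        return (self.speeds[i], self.speeds[leader])  # local observation only

    def reset(self):
        self.t = 0
        return {i: self._observe(i) for i in self.auto_ids}

    def step(self, actions):
        for i in range(self.n):
            if i in actions:                      # automated: apply policy action
                self.speeds[i] += actions[i]
            else:                                 # human: relax toward leader speed
                leader = (i + 1) % self.n
                self.speeds[i] += 0.1 * (self.speeds[leader] - self.speeds[i])
            self.speeds[i] = min(max(self.speeds[i], 0.0), 15.0)
        self.t += 1
        mean = sum(self.speeds) / self.n
        variance = sum((v - mean) ** 2 for v in self.speeds) / self.n
        obs = {i: self._observe(i) for i in self.auto_ids}
        rewards = {i: -variance for i in self.auto_ids}  # smooth flow as a crude
        return obs, rewards, self.t >= 50                # wave-mitigation proxy

class SharedPolicy:
    """One policy shared across all automated vehicles and all tasks, so the
    same weights apply no matter how many agents a scenario contains."""
    def act(self, obs):
        own_speed, leader_speed = obs
        return 0.2 * (leader_speed - own_speed)   # stand-in for a learned network

    def update(self, episode_return):
        pass                                      # stand-in for a DRL update

policy = SharedPolicy()
for episode in range(3):
    # Multi-task sampling: vary vehicle count and automation fraction per episode.
    env = ToyRingEnv(n_vehicles=random.randint(10, 30),
                     automated_fraction=random.choice([0.05, 0.2, 0.5]))
    obs, done, total = env.reset(), False, 0.0
    while not done:
        actions = {i: policy.act(o) for i, o in obs.items()}
        obs, rewards, done = env.step(actions)
        total += sum(rewards.values())
    policy.update(total)
    print(f"episode {episode}: return {total:.1f}")
```

The property mirrored here is that a single set of policy weights is reused across scenarios with different agent counts and automation fractions, which is what makes a variable-agent, multi-task formulation avoid per-system training setups.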
Related papers
- Generalizing Cooperative Eco-driving via Multi-residual Task Learning [6.864745785996583]
Multi-residual Task Learning (MRTL) is a generic learning framework based on multi-task learning.
MRTL decomposes control into nominal components, which are effectively solved by conventional control methods, and learned residual terms.
We employ MRTL for fleet-level emission reduction in mixed traffic using autonomous vehicles as a means of system control.
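A minimal, hypothetical sketch of the nominal-plus-residual decomposition described above; the controller, gains, and observation layout are illustrative assumptions, not the paper's code.

```python
# Illustrative residual control in the spirit of MRTL: the final action is a
# conventional nominal controller's output plus a learned residual correction.

def nominal_controller(own_speed, leader_speed, gap, target_gap=20.0):
    """Classical car-following law (a simple proportional rule here) that is
    already a reasonable baseline without any learning."""
    return 0.5 * (leader_speed - own_speed) + 0.1 * (gap - target_gap)

def residual_policy(observation, theta):
    """Stand-in for a learned network: outputs a small correction on top of
    the nominal action; only this part would be trained."""
    return sum(w * x for w, x in zip(theta, observation))

def control(observation, theta):
    own_speed, leader_speed, gap = observation
    u_nominal = nominal_controller(own_speed, leader_speed, gap)
    u_residual = residual_policy(observation, theta)
    return u_nominal + u_residual  # residual learning: correct, don't replace

print(control((8.0, 10.0, 25.0), theta=[0.01, -0.01, 0.005]))
```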
arXiv Detail & Related papers (2024-03-07T05:25:34Z)
- Modelling, Positioning, and Deep Reinforcement Learning Path Tracking Control of Scaled Robotic Vehicles: Design and Experimental Validation [3.807917169053206]
Scaled robotic cars are commonly equipped with a hierarchical control architecture that includes tasks dedicated to vehicle state estimation and control.
This paper covers both aspects by proposing (i) a federated extended Kalman filter (FEKF) and (ii) a novel deep reinforcement learning (DRL) path tracking controller trained via an expert demonstrator.
The experimentally validated model is used for (i) supporting the design of the FEKF and (ii) serving as a digital twin for training the proposed DRL-based path tracking algorithm.
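The federated filter and the trained controller cannot be reconstructed from this summary; as background only, below is a minimal single-EKF predict/update step of the kind a federated design would typically run per sensor group before fusing estimates. The vehicle model, noise levels, and sensor choice are assumptions.

```python
# Background sketch: one extended Kalman filter (EKF) predict/update cycle for
# a simple unicycle vehicle model. A *federated* EKF would run local filters
# like this per sensor group and fuse their estimates; that fusion step and
# all model details are assumed/omitted here.
import numpy as np

def f(x, u, dt):
    """Nonlinear motion model: state x = [px, py, heading], input u = [v, yaw_rate]."""
    px, py, th = x
    v, w = u
    return np.array([px + v * np.cos(th) * dt,
                     py + v * np.sin(th) * dt,
                     th + w * dt])

def F_jac(x, u, dt):
    """Jacobian of f with respect to the state."""
    _, _, th = x
    v, _ = u
    return np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                     [0.0, 1.0,  v * np.cos(th) * dt],
                     [0.0, 0.0,  1.0]])

def ekf_step(x, P, u, z, dt, Q, R, H):
    # Predict through the nonlinear model; propagate covariance via the Jacobian.
    x_pred = f(x, u, dt)
    F = F_jac(x, u, dt)
    P_pred = F @ P @ F.T + Q
    # Update with a linear position measurement z = H x + noise.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(3), np.eye(3)
Q, R = 0.01 * np.eye(3), 0.1 * np.eye(2)
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # GPS-like position sensor
x, P = ekf_step(x, P, u=[1.0, 0.1], z=np.array([0.1, 0.0]), dt=0.1, Q=Q, R=R, H=H)
print(x)
```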
arXiv Detail & Related papers (2024-01-10T14:40:53Z)
- Model-free Learning of Corridor Clearance: A Near-term Deployment Perspective [5.39179984304986]
An emerging public health application of connected and automated vehicle (CAV) technologies is to reduce response times of emergency medical service (EMS) by indirectly coordinating traffic.
Existing research on this topic often overlooks the impact of EMS vehicle disruptions on regular traffic, assumes 100% CAV penetration, relies on real-time traffic signal timing data and queue lengths at intersections, and makes various assumptions about traffic settings when deriving optimal model-based CAV control strategies.
To overcome these challenges and enhance real-world applicability in the near term, we propose a model-free approach employing deep reinforcement learning (DRL) for designing CAV control strategies.
arXiv Detail & Related papers (2023-12-16T06:08:53Z)
- NeuroFlow: Development of lightweight and efficient model integration scheduling strategy for autonomous driving system [0.0]
This paper proposes a specialized autonomous driving system that takes into account the unique constraints and characteristics of automotive systems.
The proposed system systematically analyzes the intricate data flow in autonomous driving and provides functionality to dynamically adjust various factors that influence deep learning models.
arXiv Detail & Related papers (2023-12-15T07:51:20Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human intervention and without requiring simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, and approach the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Learning-based Online Optimization for Autonomous Mobility-on-Demand Fleet Control [8.020856741504794]
We study online control algorithms for autonomous mobility-on-demand systems.
We develop a novel hybrid enriched machine learning pipeline which learns online dispatching and rebalancing policies.
We show that our approach outperforms state-of-the-art greedy and model-predictive control approaches.
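As context for what "dispatching" means here, below is a toy greedy assignment baseline of the kind such learned policies are compared against; the data structures and distance metric are assumptions for illustration only.

```python
# Illustrative toy of the dispatching decision a fleet-control pipeline makes:
# assign each incoming request to an idle vehicle. The greedy nearest-vehicle
# rule below is the sort of baseline reported as outperformed; a learned policy
# would replace the scoring. All data structures here are assumptions.

def greedy_dispatch(idle_vehicles, requests):
    """idle_vehicles: {vehicle_id: (x, y)}; requests: [(x, y), ...]."""
    assignments, free = {}, dict(idle_vehicles)
    for rid, (rx, ry) in enumerate(requests):
        if not free:
            break  # unserved requests wait (or are lost, depending on the model)
        vid = min(free, key=lambda v: abs(free[v][0] - rx) + abs(free[v][1] - ry))
        assignments[rid] = vid
        del free[vid]
    return assignments

print(greedy_dispatch({0: (0, 0), 1: (5, 5)}, [(4, 4), (1, 0)]))
```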
arXiv Detail & Related papers (2023-02-08T09:40:30Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board a real self-driving car, we also develop a module represented by a tiny neural network.
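A rough sketch of the kind of compact network such a planner might train, with observations in and acceleration plus steering angle out; the observation size, output scaling, and architecture are assumptions, not the paper's design.

```python
# Illustrative compact driving policy: maps an observation vector to an
# acceleration and a steering angle. All sizes and ranges are assumptions.
import torch
import torch.nn as nn

class TinyDrivingPolicy(nn.Module):
    def __init__(self, obs_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),        # two heads: [acceleration, steering]
        )

    def forward(self, obs):
        accel, steer = torch.tanh(self.net(obs)).unbind(-1)
        return accel * 3.0, steer * 0.5  # scale to plausible physical ranges (assumed)

policy = TinyDrivingPolicy()
obs = torch.zeros(1, 32)                 # placeholder observation vector
accel, steer = policy(obs)
print(accel.item(), steer.item())
```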
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- Isolating and Leveraging Controllable and Noncontrollable Visual Dynamics in World Models [65.97707691164558]
We present Iso-Dream, which improves the Dream-to-Control framework in two aspects.
First, by optimizing inverse dynamics, we encourage the world model to learn controllable and noncontrollable sources of dynamics.
Second, we optimize the behavior of the agent on the decoupled latent imaginations of the world model.
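A rough sketch of the decoupling idea: two latent branches, with an inverse-dynamics head that pushes action-relevant information into the controllable branch. Sizes, encoders, and losses are illustrative assumptions, not Iso-Dream's actual implementation.

```python
# Illustrative decoupled world model: separate controllable and noncontrollable
# latents, with an inverse-dynamics loss on the controllable branch.
import torch
import torch.nn as nn

class DecoupledWorldModel(nn.Module):
    def __init__(self, obs_dim=16, act_dim=2, latent=8):
        super().__init__()
        self.enc_ctrl = nn.Linear(obs_dim, latent)       # controllable latent
        self.enc_nonctrl = nn.Linear(obs_dim, latent)    # noncontrollable latent
        self.inv_dyn = nn.Linear(2 * latent, act_dim)    # action from latent pair

    def forward(self, obs, action, next_obs):
        z = torch.relu(self.enc_ctrl(obs))
        z_next = torch.relu(self.enc_ctrl(next_obs))
        z_non = torch.relu(self.enc_nonctrl(obs))
        # Inverse dynamics: recovering the action from consecutive controllable
        # latents forces action-relevant information into this branch only.
        action_hat = self.inv_dyn(torch.cat([z, z_next], -1))
        inv_loss = ((action_hat - action) ** 2).mean()
        return z, z_non, inv_loss

model = DecoupledWorldModel()
obs, act, nxt = torch.zeros(1, 16), torch.zeros(1, 2), torch.zeros(1, 16)
_, _, loss = model(obs, act, nxt)
print(loss.item())
```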
arXiv Detail & Related papers (2022-05-27T08:07:39Z)
- Optimizing Mixed Autonomy Traffic Flow With Decentralized Autonomous Vehicles and Multi-Agent RL [63.52264764099532]
We study the ability of autonomous vehicles to improve the throughput of a bottleneck using a fully decentralized control scheme in a mixed autonomy setting.
We apply multi-agent reinforcement learning algorithms to this problem and demonstrate that significant improvements in bottleneck throughput, from 20% at a 5% penetration rate to 33% at a 40% penetration rate, can be achieved.
arXiv Detail & Related papers (2020-10-30T22:06:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.