A Hysteretic Q-learning Coordination Framework for Emerging Mobility
Systems in Smart Cities
- URL: http://arxiv.org/abs/2011.03137v1
- Date: Thu, 5 Nov 2020 23:30:05 GMT
- Title: A Hysteretic Q-learning Coordination Framework for Emerging Mobility
Systems in Smart Cities
- Authors: Behdad Chalaki and Andreas A. Malikopoulos
- Abstract summary: Connected and automated vehicles (CAVs) can alleviate traffic congestion and air pollution, and improve safety.
In this paper, we provide a decentralized coordination framework for CAVs at a signal-free intersection to minimize travel time and improve fuel efficiency.
- Score: 3.563646182996609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Connected and automated vehicles (CAVs) can alleviate traffic congestion
and air pollution, and improve safety. In this paper, we provide a decentralized
coordination framework for CAVs at a signal-free intersection to minimize
travel time and improve fuel efficiency. We employ a simple yet powerful
reinforcement learning approach, an off-policy temporal-difference learning
method called Q-learning, enhanced with a coordination mechanism, to address this
problem. Then, we integrate a first-in-first-out queuing policy to improve the
performance of our system. We demonstrate the efficacy of our proposed approach
through simulation and comparison with the classical optimal control method
based on Pontryagin's minimum principle.
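The abstract's key ingredient, hysteretic Q-learning, is a standard multi-agent variant of Q-learning that applies two learning rates: a larger one for positive temporal-difference errors and a smaller one for negative errors, so an agent stays optimistic about outcomes degraded by teammates' exploration. A minimal tabular sketch of that update rule follows; the class name, parameters, and defaults are illustrative and not taken from the paper.

```python
import random
from collections import defaultdict

class HystereticQAgent:
    """Tabular hysteretic Q-learning agent (illustrative sketch).

    Positive TD errors are applied with a larger rate alpha, negative
    ones with a smaller rate beta (beta <= alpha), so the agent learns
    quickly from successes and only slowly penalizes actions that may
    have failed because of other agents' behavior.
    """

    def __init__(self, actions, alpha=0.1, beta=0.01, gamma=0.95, epsilon=0.1):
        assert beta <= alpha, "hysteresis requires beta <= alpha"
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = list(actions)
        self.alpha, self.beta = alpha, beta
        self.gamma, self.epsilon = gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection over the tabular Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning target: r + gamma * max_a' Q(s', a').
        target = reward + self.gamma * max(
            self.q[(next_state, a)] for a in self.actions
        )
        delta = target - self.q[(state, action)]
        # Hysteresis: learn fast from good news, slowly from bad news.
        rate = self.alpha if delta >= 0 else self.beta
        self.q[(state, action)] += rate * delta
```

Setting `beta = alpha` recovers ordinary Q-learning, which makes the hysteresis easy to ablate in experiments.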
Related papers
- Semantic Communication for Cooperative Perception using HARQ [51.148203799109304]
We leverage an importance map to distill critical semantic information within a cooperative perception semantic communication framework.
To counter the challenges posed by time-varying multipath fading, our approach incorporates orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies.
We introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ).
arXiv Detail & Related papers (2024-08-29T08:53:26Z) - Modeling Adaptive Platoon and Reservation Based Autonomous Intersection
Control: A Deep Reinforcement Learning Approach [0.0]
This study proposes an adaptive platoon-based autonomous intersection control model powered by a deep reinforcement learning (DRL) technique.
When tested on a traffic micro-simulator, our proposed model exhibits superior performance in travel efficiency and fuel conservation compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-06-24T08:50:36Z) - Real-time Cooperative Vehicle Coordination at Unsignalized Road
Intersections [7.860567520771493]
Cooperative coordination at unsignalized road intersections aims to improve driving safety and traffic throughput for connected and automated vehicles.
We introduce a model-free Markov Decision Process (MDP) formulation and tackle it with a Twin Delayed Deep Deterministic Policy Gradient (TD3)-based strategy in the deep reinforcement learning framework.
We show that the proposed strategy achieves near-optimal performance in sub-static coordination scenarios and significantly improves control performance in realistic continuous traffic flow.
arXiv Detail & Related papers (2022-05-03T02:56:02Z) - Learning to Help Emergency Vehicles Arrive Faster: A Cooperative
Vehicle-Road Scheduling Approach [24.505687255063986]
Vehicle-centric scheduling approaches recommend optimal paths for emergency vehicles (EVs).
Road-centric scheduling approaches aim to improve traffic conditions and assign a higher priority to EVs for passing an intersection.
We propose LEVID, a cooperative VehIcle-roaD scheduling approach including a real-time route planning module and a collaborative traffic signal control module.
arXiv Detail & Related papers (2022-02-20T10:25:15Z) - Hybrid Reinforcement Learning-Based Eco-Driving Strategy for Connected
and Automated Vehicles at Signalized Intersections [3.401874022426856]
Vision-perceptive methods are integrated with vehicle-to-infrastructure (V2I) communications to achieve higher mobility and energy efficiency.
The HRL framework has three components, including a rule-based driving manager that operates the collaboration between the rule-based policies and the RL policy.
Experiments show that our HRL method can reduce energy consumption by 12.70% and travel time by 11.75% when compared with a state-of-the-art model-based Eco-Driving approach.
arXiv Detail & Related papers (2022-01-19T19:31:12Z) - Transferable Deep Reinforcement Learning Framework for Autonomous
Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free deep reinforcement learning algorithm to train a neural network that predicts both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - Optimization-driven Deep Reinforcement Learning for Robust Beamforming
in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z) - Decentralized MCTS via Learned Teammate Models [89.24858306636816]
We present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search.
We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators.
arXiv Detail & Related papers (2020-03-19T13:10:20Z) - Learning Scalable Multi-Agent Coordination by Spatial Differentiation
for Traffic Signal Control [8.380832628205372]
We design a multiagent coordination framework based on Deep Reinforcement Learning methods for traffic signal control.
Specifically, we propose the Spatial Differentiation method for coordination which uses the temporal-spatial information in the replay buffer to amend the reward of each action.
arXiv Detail & Related papers (2020-02-27T02:16:00Z) - Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.