Eco-driving for Electric Connected Vehicles at Signalized Intersections:
A Parameterized Reinforcement Learning approach
- URL: http://arxiv.org/abs/2206.12065v1
- Date: Fri, 24 Jun 2022 04:11:28 GMT
- Title: Eco-driving for Electric Connected Vehicles at Signalized Intersections:
A Parameterized Reinforcement Learning approach
- Authors: Xia Jiang, Jian Zhang, Dan Li
- Abstract summary: This paper proposes an eco-driving framework for electric connected vehicles (CVs) based on reinforcement learning (RL).
We show that our strategy can significantly reduce energy consumption by learning proper action schemes without disturbing surrounding human-driven vehicles (HDVs).
- Score: 6.475252042082737
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes an eco-driving framework for electric connected vehicles
(CVs) based on reinforcement learning (RL) to improve vehicle energy efficiency
at signalized intersections. The vehicle agent is specified by integrating a
model-based car-following policy, a lane-changing policy, and the RL policy, to
ensure the safe operation of a CV. Subsequently, a Markov Decision Process (MDP)
is formulated that enables the vehicle to perform longitudinal control and make
lateral decisions, jointly optimizing the car-following and lane-changing
behaviors of CVs in the vicinity of intersections. The hybrid action space is
then parameterized as a hierarchical structure, so that agents can be trained
with two-dimensional motion patterns in a dynamic traffic environment. Finally,
our proposed methods are evaluated in the SUMO traffic simulator from both a
single-vehicle-based perspective and a flow-based perspective. The results show
that our strategy significantly reduces energy consumption by learning proper
action schemes without disturbing surrounding human-driven vehicles (HDVs).
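The hierarchical parameterization of the hybrid action space follows the general idea of parameterized-action RL (e.g., P-DQN-style methods): a continuous longitudinal parameter is attached to each discrete lateral option, and the discrete decision is taken on top of those parameters. The sketch below illustrates only this structure; the state features, network sizes, acceleration bound, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a parameterized hybrid action space for eco-driving:
# each discrete lateral decision (keep lane, change left, change right)
# carries its own continuous acceleration parameter. Assumed, not the
# paper's code.
import torch
import torch.nn as nn

STATE_DIM = 8        # e.g. speed, gap, signal phase/timing, lane index (assumed)
N_DISCRETE = 3       # keep lane, change left, change right
ACCEL_BOUND = 3.0    # |a| <= 3 m/s^2 (assumed)

class ParamNet(nn.Module):
    """Maps state -> one continuous acceleration per discrete option."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_DISCRETE), nn.Tanh())

    def forward(self, s):
        return ACCEL_BOUND * self.net(s)                 # (batch, N_DISCRETE)

class QNet(nn.Module):
    """Scores each (discrete option, its continuous parameter) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + N_DISCRETE, 64), nn.ReLU(),
                                 nn.Linear(64, N_DISCRETE))

    def forward(self, s, params):
        return self.net(torch.cat([s, params], dim=-1))  # (batch, N_DISCRETE)

def select_action(state, param_net, q_net):
    """Hierarchical selection: compute continuous parameters first, then pick
    the discrete option whose parameterized Q-value is highest."""
    with torch.no_grad():
        params = param_net(state)            # acceleration per lateral option
        q_values = q_net(state, params)
        k = int(q_values.argmax(dim=-1))     # lateral decision
        accel = float(params[0, k])          # its longitudinal command
    return k, accel

# Usage: one decision step on a dummy state.
state = torch.zeros(1, STATE_DIM)
lane_action, acceleration = select_action(state, ParamNet(), QNet())
```

In this layout the discrete head and the parameter head are trained jointly, which is what allows the agent to learn two-dimensional (longitudinal and lateral) motion patterns within a single policy.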
Related papers
- SPformer: A Transformer Based DRL Decision Making Method for Connected Automated Vehicles [9.840325772591024]
We propose a CAV decision-making architecture based on transformer and reinforcement learning algorithms.
A learnable policy token is used as the learning medium of the multi-vehicle joint policy.
Our model can make good use of the state information of all vehicles in the traffic scenario.
arXiv Detail & Related papers (2024-09-23T15:16:35Z)
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios are still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z)
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and lower CO2 emissions.
In this study, we explore a computer vision approach to TSC that modulates on-road traffic flows through visual observation.
We introduce TrafficDojo, a holistic traffic simulation framework for vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- Interaction-Aware Decision-Making for Autonomous Vehicles in Forced Merging Scenario Leveraging Social Psychology Factors [7.812717451846781]
We consider a behavioral model that incorporates both social behaviors and personal objectives of the interacting drivers.
We develop a receding-horizon control-based decision-making strategy that estimates the other drivers' intentions online.
arXiv Detail & Related papers (2023-09-25T19:49:14Z)
- Robust Driving Policy Learning with Guided Meta Reinforcement Learning [49.860391298275616]
We introduce an efficient method to train diverse driving policies for social vehicles as a single meta-policy.
By randomizing the interaction-based reward functions of social vehicles, we can generate diverse objectives and efficiently train the meta-policy.
We propose a training strategy to enhance the robustness of the ego vehicle's driving policy using the environment where social vehicles are controlled by the learned meta-policy.
arXiv Detail & Related papers (2023-07-19T17:42:36Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high-performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Learning to Help Emergency Vehicles Arrive Faster: A Cooperative Vehicle-Road Scheduling Approach [24.505687255063986]
Vehicle-centric scheduling approaches recommend optimal paths for emergency vehicles.
Road-centric scheduling approaches aim to improve traffic conditions and assign higher priority to EVs passing an intersection.
We propose LEVID, a cooperative VehIcle-roaD scheduling approach including a real-time route planning module and a collaborative traffic signal control module.
arXiv Detail & Related papers (2022-02-20T10:25:15Z)
- Hybrid Reinforcement Learning-Based Eco-Driving Strategy for Connected and Automated Vehicles at Signalized Intersections [3.401874022426856]
Vision-perceptive methods are integrated with vehicle-to-infrastructure (V2I) communications to achieve higher mobility and energy efficiency.
The HRL framework has three components, including a rule-based driving manager that coordinates the collaboration between the rule-based policies and the RL policy; a minimal sketch of this kind of arbitration appears after this list.
Experiments show that our HRL method can reduce energy consumption by 12.70% and travel time by 11.75% when compared with a state-of-the-art model-based Eco-Driving approach.
arXiv Detail & Related papers (2022-01-19T19:31:12Z)
- Learning Interaction-aware Guidance Policies for Motion Planning in Dense Traffic Scenarios [8.484564880157148]
This paper presents a novel framework for interaction-aware motion planning in dense traffic scenarios.
We propose to learn, via deep Reinforcement Learning (RL), an interaction-aware policy providing global guidance about the cooperativeness of other vehicles.
The learned policy can reason and guide the local optimization-based planner with interactive behavior to pro-actively merge in dense traffic while remaining safe in case the other vehicles do not yield.
arXiv Detail & Related papers (2021-07-09T16:43:12Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Deep Structured Reactive Planning [94.92994828905984]
We propose a novel data-driven, reactive planning objective for self-driving vehicles.
We show that our model outperforms a non-reactive variant in successfully completing highly complex maneuvers.
arXiv Detail & Related papers (2021-01-18T01:43:36Z)
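Several of the entries above, and the HRL eco-driving strategy in particular, combine rule-based policies with a learned RL policy under a driving manager. The following is a minimal, assumption-laden sketch of such an arbitration scheme: the RL acceleration command is accepted only while a simple time-headway safety rule holds, and a crude car-following rule takes over otherwise. All thresholds, function names, and the fallback rule are illustrative assumptions, not taken from the cited paper.

```python
# Sketch of a rule-based driving manager arbitrating between a learned RL
# policy and rule-based fallback policies (assumed, not the cited paper's code).

def safe_gap(speed_mps, reaction_time_s=1.0, min_gap_m=2.0):
    """Simple time-headway based safety gap (illustrative rule)."""
    return min_gap_m + reaction_time_s * speed_mps

def rule_based_accel(gap_m, speed_mps, desired_speed_mps=13.9):
    """Crude car-following fallback: brake when too close, else track the desired speed."""
    if gap_m < safe_gap(speed_mps):
        return -3.0                        # hard braking, m/s^2
    return min(1.5, desired_speed_mps - speed_mps)

def driving_manager(gap_m, speed_mps, rl_accel):
    """Prefer the RL command unless it would violate the safety rule."""
    if gap_m >= safe_gap(speed_mps) and abs(rl_accel) <= 3.0:
        return rl_accel                    # RL policy governs eco-driving
    return rule_based_accel(gap_m, speed_mps)

# Usage: the RL policy suggests gentle acceleration, but the gap is unsafe,
# so the rule-based fallback brakes instead.
print(driving_manager(gap_m=3.0, speed_mps=12.0, rl_accel=0.2))
```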
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.