Robust Electric Vehicle Balancing of Autonomous Mobility-On-Demand
System: A Multi-Agent Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2307.16228v1
- Date: Sun, 30 Jul 2023 13:40:42 GMT
- Title: Robust Electric Vehicle Balancing of Autonomous Mobility-On-Demand
System: A Multi-Agent Reinforcement Learning Approach
- Authors: Sihong He, Shuo Han, Fei Miao
- Abstract summary: Electric autonomous vehicles (EAVs) are attracting attention for future autonomous mobility-on-demand (AMoD) systems.
EAVs' unique charging patterns make it challenging to accurately predict the EAV supply in E-AMoD systems.
Despite the success of reinforcement learning-based E-AMoD balancing algorithms, state uncertainties in the EV supply and mobility demand remain unexplored.
- Score: 6.716627474314613
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Electric autonomous vehicles (EAVs) are attracting attention for future
autonomous mobility-on-demand (AMoD) systems due to their economic and societal
benefits. However, EAVs' unique charging patterns (long charging times, high
charging frequency, unpredictable charging behaviors, etc.) make it challenging
to accurately predict the EAV supply in E-AMoD systems. Furthermore, uncertainty
in mobility demand prediction makes it an urgent and challenging task to design
an integrated vehicle balancing solution under both supply and demand
uncertainties. Despite the success of reinforcement learning-based E-AMoD
balancing algorithms, state uncertainties in the EV supply and mobility
demand remain unexplored. In this work, we design a multi-agent reinforcement
learning (MARL)-based framework for EAV balancing in E-AMoD systems, with
adversarial agents that model the EAV supply and mobility demand
uncertainties which may undermine the vehicle balancing solutions. We then
propose a robust E-AMoD Balancing MARL (REBAMA) algorithm to train a robust
EAV balancing policy that balances both the supply-demand ratio and the charging
utilization rate across the whole city. Experiments show that our proposed
robust method outperforms a non-robust MARL method that does not consider
state uncertainties; it improves the reward, charging utilization
fairness, and supply-demand fairness by 19.28%, 28.18%, and 3.97%,
respectively. Compared with a robust optimization-based method, the proposed
MARL algorithm improves the reward, charging utilization fairness, and
supply-demand fairness by 8.21%, 8.29%, and 9.42%, respectively.
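The abstract gives no implementation details, so the following is a minimal, self-contained sketch of the general idea behind adversarial training for robust EV balancing: a balancing policy is optimized against bounded perturbations of the observed supply and demand state. The environment, the toy reward, the sampled-perturbation adversary, and the zeroth-order update are all illustrative assumptions; this is not the REBAMA algorithm itself.

```python
# Illustrative sketch only: robust training of a balancing policy against
# bounded perturbations of the observed supply/demand state.
import numpy as np

n_regions = 4          # city regions between which idle EAVs are rebalanced
epsilon = 0.1          # bound on the adversary's perturbation of the observed state
lr = 1e-2
rng = np.random.default_rng(0)

# Linear policy parameters: map the (supply, demand) observation to per-region scores.
theta = rng.normal(scale=0.1, size=(2 * n_regions, n_regions))

def policy(state, params):
    """Softmax over regions -> fraction of the idle fleet sent to each region."""
    logits = state @ params
    e = np.exp(logits - logits.max())
    return e / e.sum()

def reward(supply, demand, action):
    """Toy reward: penalize the supply-demand mismatch after rebalancing."""
    supply_after = supply + action - action.mean()
    return -np.abs(supply_after - demand).sum()

for episode in range(500):
    supply = rng.uniform(0.5, 1.5, n_regions)    # predicted EAV supply per region
    demand = rng.uniform(0.5, 1.5, n_regions)    # predicted mobility demand per region
    base_state = np.concatenate([supply, demand])

    # Adversary: among a few bounded random perturbations, pick the one that hurts
    # the current balancing policy the most (a stand-in for a learned adversarial agent).
    candidates = epsilon * rng.uniform(-1, 1, size=(8, 2 * n_regions))
    worst = min(candidates,
                key=lambda d: reward(supply, demand, policy(base_state + d, theta)))
    observed = base_state + worst

    # Protagonist: improve the policy on the adversarially perturbed observation
    # using a crude zeroth-order (finite-difference) gradient estimate.
    r = reward(supply, demand, policy(observed, theta))
    noise = rng.normal(scale=0.05, size=theta.shape)
    r_plus = reward(supply, demand, policy(observed, theta + noise))
    theta += lr * (r_plus - r) * noise
```

In the paper's setting the adversary is itself a trained agent and the policies are updated with a MARL algorithm rather than the finite-difference step used here; the sketch only illustrates the worst-case training loop that makes the learned balancing policy robust.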
Related papers
- MetaTrading: An Immersion-Aware Model Trading Framework for Vehicular Metaverse Services [94.61039892220037]
We present a novel immersion-aware model trading framework that incentivizes metaverse users (MUs) to contribute learning models for augmented reality (AR) services in the vehicular metaverse.
Considering dynamic network conditions and privacy concerns, we formulate the reward decisions of metaverse service providers (MSPs) as a multi-agent Markov decision process.
Experimental results demonstrate that the proposed framework can effectively provide higher-value models for object detection and classification in AR services on real AR-related vehicle datasets.
arXiv Detail & Related papers (2024-10-25T16:20:46Z)
- Controlling Large Electric Vehicle Charging Stations via User Behavior Modeling and Stochastic Programming [0.0]
This paper introduces an Electric Vehicle Charging Station model that incorporates real-world constraints.
We propose two Multi-Stage Programming approaches that leverage user-provided information.
A user's behavior model based on a sojourn-time-dependent process enhances cost reduction while maintaining customer satisfaction.
arXiv Detail & Related papers (2024-02-20T18:37:11Z)
- A Deep Q-Learning based Smart Scheduling of EVs for Demand Response in Smart Grids [0.0]
We propose a model-free solution, leveraging Deep Q-Learning to schedule the charging and discharging activities of EVs within a microgrid.
We adapted the Bellman equation to assess the value of a state based on specific rewards for EV scheduling actions, used a neural network to estimate Q-values for the available actions, and used the epsilon-greedy algorithm to balance exploration and exploitation while tracking the target energy profile.
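As a concrete illustration of the scheme summarized above, here is a minimal Q-learning sketch with an epsilon-greedy policy and a Bellman update. The state encoding, action set, toy reward, and target profile are assumptions for illustration, and a lookup table replaces the paper's neural-network Q-function.

```python
# Illustrative sketch only: epsilon-greedy Q-learning for EV charge/discharge scheduling.
import numpy as np

rng = np.random.default_rng(1)
actions = [-1, 0, +1]            # discharge, idle, charge (one EV, unit power steps)
n_soc_levels, n_hours = 11, 24   # discretized state: (state of charge, hour of day)
Q = np.zeros((n_soc_levels, n_hours, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1
target_profile = 5 + 3 * np.sin(np.linspace(0, 2 * np.pi, n_hours))  # toy target load

def step(soc, hour, a_idx):
    """Toy transition: the reward penalizes deviation from the target energy profile."""
    a = actions[a_idx]
    next_soc = int(np.clip(soc + a, 0, n_soc_levels - 1))
    load = 5 + a                   # base load plus the EV's charge/discharge action
    r = -abs(load - target_profile[hour])
    return next_soc, (hour + 1) % n_hours, r

for episode in range(2000):
    soc, hour = int(rng.integers(n_soc_levels)), 0
    for _ in range(n_hours):
        # Epsilon-greedy: explore with probability eps, otherwise exploit argmax Q.
        if rng.random() < eps:
            a_idx = int(rng.integers(len(actions)))
        else:
            a_idx = int(np.argmax(Q[soc, hour]))
        next_soc, next_hour, r = step(soc, hour, a_idx)
        # Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        td_target = r + gamma * Q[next_soc, next_hour].max()
        Q[soc, hour, a_idx] += alpha * (td_target - Q[soc, hour, a_idx])
        soc, hour = next_soc, next_hour
```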
arXiv Detail & Related papers (2024-01-05T06:04:46Z)
- Charge Manipulation Attacks Against Smart Electric Vehicle Charging Stations and Deep Learning-based Detection Mechanisms [49.37592437398933]
"Smart" electric vehicle charging stations (EVCSs) will be a key step toward achieving green transportation.
We investigate charge manipulation attacks (CMAs) against EV charging, in which an attacker manipulates the information exchanged during smart charging operations.
We propose an unsupervised deep learning-based mechanism to detect CMAs by monitoring the parameters involved in EV charging.
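The summary above does not describe the detector itself, so the sketch below uses a simple density-based stand-in: fit the distribution of normal charging-session parameters and flag sessions whose Mahalanobis distance exceeds a percentile threshold. The feature names, simulated data, and threshold rule are illustrative assumptions, and the statistical score replaces the paper's unsupervised deep learning model.

```python
# Illustrative sketch only: unsupervised anomaly detection on charging telemetry.
import numpy as np

rng = np.random.default_rng(2)

# Simulated "normal" charging sessions:
# columns = [requested_kWh, max_current_A, duration_h, start_soc]  (assumed features)
normal = rng.normal(loc=[30.0, 32.0, 4.0, 0.3],
                    scale=[5.0, 2.0, 0.5, 0.05], size=(500, 4))

mean = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def anomaly_score(sessions):
    """Mahalanobis distance of each session from the normal-data distribution."""
    d = sessions - mean
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# Threshold calibrated on normal data only; no attack labels are needed.
threshold = np.percentile(anomaly_score(normal), 99)

# A manipulated session, e.g. an inflated energy request, scores far above the threshold.
attack = np.array([[90.0, 32.0, 4.0, 0.3]])
print(anomaly_score(attack) > threshold)   # expected: [ True]
```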
arXiv Detail & Related papers (2023-10-18T18:38:59Z)
- Multi-Objective Optimization for UAV Swarm-Assisted IoT with Virtual Antenna Arrays [55.736718475856726]
Unmanned aerial vehicle (UAV) networks are a promising technology for assisting the Internet of Things (IoT).
Existing UAV-assisted data harvesting and dissemination schemes require UAVs to frequently fly between IoT devices and access points.
We introduce collaborative beamforming into IoTs and UAVs simultaneously to achieve energy and time-efficient data harvesting and dissemination.
arXiv Detail & Related papers (2023-08-03T02:49:50Z)
- A Robust and Constrained Multi-Agent Reinforcement Learning Electric Vehicle Rebalancing Method in AMoD Systems [20.75789597995344]
Electric vehicles (EVs) play critical roles in autonomous mobility-on-demand (AMoD) systems.
Their unique charging patterns increase the model uncertainties in AMoD systems.
Model uncertainties have not been considered explicitly in EV AMoD system rebalancing.
arXiv Detail & Related papers (2022-09-17T03:24:10Z)
- A new Hyper-heuristic based on Adaptive Simulated Annealing and Reinforcement Learning for the Capacitated Electric Vehicle Routing Problem [9.655068751758952]
Electric vehicles (EVs) have been adopted in urban areas to reduce environmental pollution and global warming.
However, deficiencies remain in routing last-mile logistics trajectories, which continue to impact social and economic sustainability.
This paper proposes a hyper-heuristic approach called Hyper-heuristic Adaptive Simulated Annealing with Reinforcement Learning.
arXiv Detail & Related papers (2022-06-07T11:10:38Z)
- Risk Adversarial Learning System for Connected and Autonomous Vehicle Charging [43.42105971560163]
We study the design of a rational decision support system (RDSS) for a connected and autonomous vehicle charging infrastructure (CAV-CI).
In the considered CAV-CI, the distribution system operator (DSO) deploys electric vehicle supply equipment (EVSE) to provide an EV charging facility for human-driven connected vehicles (CVs) and autonomous vehicles (AVs).
A charging request by a human-driven EV becomes irrational when it demands more energy and a longer charging period than it actually needs.
We propose a novel risk adversarial multi-agent learning system (ALS) for the CAV-CI to solve this problem.
arXiv Detail & Related papers (2021-08-02T02:38:15Z)
- Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)