Risk Adversarial Learning System for Connected and Autonomous Vehicle
Charging
- URL: http://arxiv.org/abs/2108.01466v1
- Date: Mon, 2 Aug 2021 02:38:15 GMT
- Title: Risk Adversarial Learning System for Connected and Autonomous Vehicle
Charging
- Authors: Md. Shirajum Munir, Ki Tae Kim, Kyi Thar, Dusit Niyato, and Choong
Seon Hong
- Abstract summary: We study the design of a rational decision support system (RDSS) for a connected and autonomous vehicle charging infrastructure (CAV-CI).
In the considered CAV-CI, the distribution system operator (DSO) deploys electric vehicle supply equipment (EVSE) to provide an EV charging facility for human-driven connected vehicles (CVs) and autonomous vehicles (AVs).
The charging request by a human-driven EV becomes irrational when it demands more energy and a longer charging period than it actually needs.
We propose a novel risk adversarial multi-agent learning system (RAMALS) for the CAV-CI to solve the formulated RDSS problem.
- Score: 43.42105971560163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, the design of a rational decision support system (RDSS) for a
connected and autonomous vehicle charging infrastructure (CAV-CI) is studied.
In the considered CAV-CI, the distribution system operator (DSO) deploys
electric vehicle supply equipment (EVSE) to provide an EV charging facility for
human-driven connected vehicles (CVs) and autonomous vehicles (AVs). The
charging request by a human-driven EV becomes irrational when it demands more
energy and a longer charging period than it actually needs. Therefore, the
scheduling policy of each EVSE must adaptively accommodate irrational charging
requests to satisfy the charging demand of both CVs and AVs. To tackle this, we
formulate an RDSS problem for the DSO, where the objective is to maximize the
charging capacity utilization by satisfying the laxity risk of the DSO. Thus,
we devise a rational reward maximization problem to adapt to the irrational
behavior of CVs in a data-informed manner. We propose a novel risk adversarial
multi-agent learning system (RAMALS) for CAV-CI to solve the formulated RDSS
problem. In RAMALS, the DSO acts as a centralized risk adversarial agent (RAA)
for informing the laxity risk to each EVSE. Subsequently, each EVSE plays the
role of a self-learner agent that adaptively schedules its own EV sessions by
taking advice from the RAA. Experimental results show that the proposed RAMALS
affords around a 46.6% improvement in charging rate, about a 28.6% improvement
in the EVSE's active charging time, and at least 33.3% more energy utilization,
as compared to a currently deployed ACN EVSE system and other baselines.
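The division of labor described in the abstract, a centralized risk adversarial agent (RAA) that broadcasts a laxity-risk signal and EVSE self-learner agents that schedule sessions using that advice, can be sketched as a simple message-passing loop. This is a minimal illustration only, not the paper's implementation: the laxity-risk formula, the discount rule, and all class names are assumptions.

```python
# Minimal sketch of the RAMALS control flow: a centralized RAA turns
# fleet-wide laxity into a risk signal, and each EVSE agent discounts
# possibly inflated (irrational) requests under high risk.
# All formulas here are illustrative assumptions, not the paper's method.
from dataclasses import dataclass


@dataclass
class Session:
    energy_requested: float  # kWh requested by the EV (may be inflated)
    laxity: float            # slack time before departure deadline, hours


class RiskAdversarialAgent:
    """DSO-side agent: maps fleet-wide laxity to a risk signal in (0, 1]."""

    def laxity_risk(self, sessions):
        # Low average laxity -> high risk of missing charging deadlines.
        avg_laxity = sum(s.laxity for s in sessions) / len(sessions)
        return 1.0 / (1.0 + avg_laxity)


class EVSEAgent:
    """Self-learner agent: trusts the raw request less as risk grows."""

    def schedule(self, session, risk):
        discount = 1.0 - 0.5 * risk  # assumed linear discount rule
        return session.energy_requested * discount


sessions = [Session(energy_requested=40.0, laxity=2.0),
            Session(energy_requested=30.0, laxity=0.5)]
raa = RiskAdversarialAgent()
risk = raa.laxity_risk(sessions)
allocations = [EVSEAgent().schedule(s, risk) for s in sessions]
```

In this sketch the RAA computes one scalar risk for the fleet; the paper's agents instead learn their scheduling policies, so the discount rule above merely stands in for a learned response to the risk advice.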
Related papers
- A Deep Q-Learning based Smart Scheduling of EVs for Demand Response in
Smart Grids [0.0]
We propose a model-free solution, leveraging Deep Q-Learning to schedule the charging and discharging activities of EVs within a microgrid.
We adapt the Bellman equation to assess the value of a state based on specific rewards for EV scheduling actions, use a neural network to estimate Q-values for the available actions, and apply the epsilon-greedy algorithm to balance exploration and exploitation while meeting the target energy profile.
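The Bellman update and epsilon-greedy selection mentioned above can be illustrated with a single tabular Q-learning step. This is a toy sketch: the states, actions, rewards, and hyperparameters are invented, and the paper itself uses a neural network rather than a table.

```python
# Toy tabular Q-learning step illustrating the Bellman update and
# epsilon-greedy action selection. States/actions/rewards are invented.
import random
from collections import defaultdict

ACTIONS = ["charge", "discharge", "idle"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed hyperparameters

Q = defaultdict(float)  # maps (state, action) -> estimated value


def select_action(state, rng):
    # Epsilon-greedy: explore with probability EPSILON, else exploit.
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def q_update(state, action, reward, next_state):
    # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])


rng = random.Random(0)
a = select_action("off_peak", rng)
q_update("off_peak", "charge", reward=1.0, next_state="peak")
```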
arXiv Detail & Related papers (2024-01-05T06:04:46Z)
- Charge Manipulation Attacks Against Smart Electric Vehicle Charging Stations and Deep Learning-based Detection Mechanisms [49.37592437398933]
"Smart" electric vehicle charging stations (EVCSs) will be a key step toward achieving green transportation.
We investigate charge manipulation attacks (CMAs) against EV charging, in which an attacker manipulates the information exchanged during smart charging operations.
We propose an unsupervised deep learning-based mechanism to detect CMAs by monitoring the parameters involved in EV charging.
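The detection idea above, learning a profile of normal charging parameters and flagging sessions that deviate from it, can be sketched with a simple threshold detector. The paper uses an unsupervised deep model; this z-score detector is only an illustrative simplification, and the parameter fields and threshold are assumptions.

```python
# Stand-in sketch for unsupervised CMA detection: learn a per-parameter
# profile from attack-free sessions, then flag sessions whose monitored
# parameters deviate strongly from it. The paper uses a deep model; this
# z-score rule and all field names/thresholds are illustrative assumptions.
from statistics import mean, stdev


def fit_profile(clean_sessions):
    """Learn per-parameter (mean, std) from attack-free sessions."""
    keys = clean_sessions[0].keys()
    return {k: (mean(s[k] for s in clean_sessions),
                stdev(s[k] for s in clean_sessions)) for k in keys}


def is_anomalous(session, profile, z_threshold=3.0):
    """Flag a session if any parameter is beyond z_threshold sigmas."""
    for k, (mu, sigma) in profile.items():
        if sigma > 0 and abs(session[k] - mu) / sigma > z_threshold:
            return True
    return False


clean = [{"current_a": 16.0 + 0.1 * i, "duration_h": 2.0 + 0.05 * i}
         for i in range(20)]
profile = fit_profile(clean)
attack = {"current_a": 80.0, "duration_h": 2.5}  # manipulated charge current
```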
arXiv Detail & Related papers (2023-10-18T18:38:59Z)
- An Efficient Distributed Multi-Agent Reinforcement Learning for EV Charging Network Control [2.5477011559292175]
We introduce a decentralized Multi-agent Reinforcement Learning (MARL) charging framework that prioritizes the preservation of privacy for EV owners.
Our results demonstrate that the centralized-training, decentralized-execution (CTDE) framework improves the performance of the charging network by reducing network costs.
arXiv Detail & Related papers (2023-08-24T16:53:52Z)
- Robust Electric Vehicle Balancing of Autonomous Mobility-On-Demand System: A Multi-Agent Reinforcement Learning Approach [6.716627474314613]
Electric autonomous vehicles (EAVs) are attracting attention in future autonomous mobility-on-demand (AMoD) systems.
EAVs' unique charging patterns make it challenging to accurately predict EAV supply in E-AMoD systems.
Despite the success of reinforcement learning-based E-AMoD balancing algorithms, state uncertainties in EV supply or mobility demand remain unexplored.
arXiv Detail & Related papers (2023-07-30T13:40:42Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity enlarges the ATSC's cyber-attack surface and increases its vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection to create congestion on one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- A new Hyper-heuristic based on Adaptive Simulated Annealing and Reinforcement Learning for the Capacitated Electric Vehicle Routing Problem [9.655068751758952]
Electric vehicles (EVs) have been adopted in urban areas to reduce environmental pollution and global warming.
Deficiencies remain in routing last-mile logistics trajectories, which continue to impact social and economic sustainability.
This paper proposes a hyper-heuristic approach called Hyper-heuristic Adaptive Simulated Annealing with Reinforcement Learning.
arXiv Detail & Related papers (2022-06-07T11:10:38Z)
- Computationally efficient joint coordination of multiple electric vehicle charging points using reinforcement learning [6.37470346908743]
A major challenge in today's power grid is to manage the increasing load from electric vehicle (EV) charging.
We propose a single-step solution that jointly coordinates multiple charging points at once.
We show that our new RL solutions still improve the performance of charging demand coordination by 40-50% compared to a business-as-usual policy.
arXiv Detail & Related papers (2022-03-26T13:42:57Z)
- An Energy Consumption Model for Electrical Vehicle Networks via Extended Federated-learning [50.85048976506701]
This paper proposes a novel solution to range anxiety based on a federated-learning model.
It is capable of estimating battery consumption and providing energy-efficient route planning for vehicle networks.
arXiv Detail & Related papers (2021-11-13T15:03:44Z)
- Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments before being deployed in the safety-critical target environment.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.