Learning to Operate an Electric Vehicle Charging Station Considering
Vehicle-grid Integration
- URL: http://arxiv.org/abs/2111.01294v1
- Date: Mon, 1 Nov 2021 23:10:28 GMT
- Authors: Zuzhao Ye, Yuanqi Gao, Nanpeng Yu
- Abstract summary: We propose a novel centralized allocation and decentralized execution (CADE) reinforcement learning (RL) framework to maximize the charging station's profit.
In the centralized allocation process, EVs are allocated to either the waiting or charging spots. In the decentralized execution process, each charger makes its own charging/discharging decision while learning the action-value functions from a shared replay memory.
Numerical results show that the proposed CADE framework is both computationally efficient and scalable, and significantly outperforms the baseline model predictive control (MPC) method.
- Score: 4.855689194518905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid adoption of electric vehicles (EVs) calls for the widespread
installation of EV charging stations. To maximize the profitability of charging
stations, intelligent controllers that provide both charging and electric grid
services are in great need. However, it is challenging to determine the optimal
charging schedule due to the uncertain arrival time and charging demands of
EVs. In this paper, we propose a novel centralized allocation and decentralized
execution (CADE) reinforcement learning (RL) framework to maximize the charging
station's profit. In the centralized allocation process, EVs are allocated to
either the waiting or charging spots. In the decentralized execution process,
each charger makes its own charging/discharging decision while learning the
action-value functions from a shared replay memory. This CADE framework
significantly improves the scalability and sample efficiency of the RL
algorithm. Numerical results show that the proposed CADE framework is both
computationally efficient and scalable, and significantly outperforms the
baseline model predictive control (MPC). We also provide an in-depth analysis
of the learned action-value function to explain the inner working of the
reinforcement learning agent.
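The CADE idea described in the abstract can be sketched roughly as follows. This is a minimal toy illustration, not the authors' implementation: the state and reward definitions, the laxity-style allocation rule, and all class and field names are assumptions made for the sketch.

```python
import random
from collections import deque

class SharedReplay:
    """One replay memory shared by all chargers (the sample-efficiency idea)."""
    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)
    def push(self, transition):
        self.buffer.append(transition)
    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

def allocate(evs, n_chargers):
    """Centralized allocation: the most urgent EVs (least slack) get chargers."""
    ranked = sorted(evs, key=lambda ev: ev["deadline"] - ev["demand"])
    return ranked[:n_chargers], ranked[n_chargers:]  # charging, waiting

class Charger:
    """Decentralized execution: each charger picks charge/idle/discharge from a
    tabular action-value function (a stand-in for the paper's RL agent)."""
    ACTIONS = (-1, 0, 1)  # discharge, idle, charge
    def __init__(self, memory, eps=0.1):
        self.q = {}           # (state, action) -> estimated value
        self.memory = memory
        self.eps = eps
    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q.get((state, a), 0.0))
    def learn(self, batch_size=4, lr=0.5):
        # Update toward sampled rewards from the *shared* memory.
        for s, a, r in self.memory.sample(batch_size):
            key = (s, a)
            self.q[key] = self.q.get(key, 0.0) + lr * (r - self.q.get(key, 0.0))

random.seed(0)
memory = SharedReplay()
chargers = [Charger(memory) for _ in range(2)]
evs = [{"id": i, "demand": d, "deadline": 10} for i, d in enumerate([8, 3, 6])]
charging, waiting = allocate(evs, n_chargers=2)
for c, ev in zip(chargers, charging):
    state = ev["deadline"] - ev["demand"]  # slack time as a toy state
    action = c.act(state)
    memory.push((state, action, -abs(action - 1)))  # toy reward: prefer charging
    c.learn()
print([ev["id"] for ev in charging], [ev["id"] for ev in waiting])  # prints: [0, 2] [1]
```

The shared replay memory is the key structural point: every charger learns from the transitions of all chargers, which is what underpins the sample-efficiency claim in the abstract.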
Related papers
- Centralized vs. Decentralized Multi-Agent Reinforcement Learning for Enhanced Control of Electric Vehicle Charging Networks [1.9188272016043582]
We introduce a novel approach for distributed and cooperative charging strategy using a Multi-Agent Reinforcement Learning (MARL) framework.
Our method is built upon the Deep Deterministic Policy Gradient (DDPG) algorithm for a group of EVs in a residential community.
Our results indicate that, despite higher policy variances and training complexity, the CTDE-DDPG framework significantly improves charging efficiency by reducing total variation by approximately 36% and charging cost by around 9.1% on average.
arXiv Detail & Related papers (2024-04-18T21:50:03Z)
- Charge Manipulation Attacks Against Smart Electric Vehicle Charging Stations and Deep Learning-based Detection Mechanisms [49.37592437398933]
"Smart" electric vehicle charging stations (EVCSs) will be a key step toward achieving green transportation.
We investigate charge manipulation attacks (CMAs) against EV charging, in which an attacker manipulates the information exchanged during smart charging operations.
We propose an unsupervised deep learning-based mechanism to detect CMAs by monitoring the parameters involved in EV charging.
arXiv Detail & Related papers (2023-10-18T18:38:59Z)
- Fast-ELECTRA for Efficient Pre-training [83.29484808667532]
ELECTRA pre-trains language models by detecting tokens in a sequence that have been replaced by an auxiliary model.
We propose Fast-ELECTRA, which leverages an existing language model as the auxiliary model.
Our approach rivals the performance of state-of-the-art ELECTRA-style pre-training methods, while largely eliminating the computation and memory cost brought by the joint training of the auxiliary model.
arXiv Detail & Related papers (2023-10-11T09:55:46Z)
- An Efficient Distributed Multi-Agent Reinforcement Learning for EV Charging Network Control [2.5477011559292175]
We introduce a decentralized Multi-agent Reinforcement Learning (MARL) charging framework that prioritizes the preservation of privacy for EV owners.
Our results demonstrate that the CTDE framework improves the performance of the charging network by reducing the network costs.
arXiv Detail & Related papers (2023-08-24T16:53:52Z)
- Federated Reinforcement Learning for Electric Vehicles Charging Control on Distribution Networks [42.04263644600909]
Multi-agent deep reinforcement learning (MADRL) has proven its effectiveness in EV charging control.
Existing MADRL-based approaches fail to consider the natural power flow of EV charging/discharging in the distribution network.
This paper proposes a novel approach that combines multi-EV charging/discharging with a radial distribution network (RDN) operating under optimal power flow.
arXiv Detail & Related papers (2023-08-17T05:34:46Z)
- Computationally efficient joint coordination of multiple electric vehicle charging points using reinforcement learning [6.37470346908743]
A major challenge in today's power grid is to manage the increasing load from electric vehicle (EV) charging.
We propose a single-step solution that jointly coordinates multiple charging points at once.
We show that our new RL solutions still improve the performance of charging demand coordination by 40-50% compared to a business-as-usual policy.
arXiv Detail & Related papers (2022-03-26T13:42:57Z)
- Optimized cost function for demand response coordination of multiple EV charging stations using reinforcement learning [6.37470346908743]
We build on previous research on RL, based on a Markov decision process (MDP) to simultaneously coordinate multiple charging stations.
We propose an improved cost function that essentially forces the learned control policy to always fulfill any charging demand that does not offer flexibility.
We rigorously compare the newly proposed batch RL fitted Q-iteration implementation with the original (costly) one, using real-world data.
arXiv Detail & Related papers (2022-03-03T11:22:27Z)
- Efficient Representation for Electric Vehicle Charging Station Operations using Reinforcement Learning [5.815007821143811]
We develop aggregation schemes that are based on the urgency of EV charging, namely the laxity value.
A least-laxity first (LLF) rule is adopted so that only the total charging power of the EVCS needs to be considered.
In addition, we propose an equivalent state aggregation that can guarantee to attain the same optimal policy.
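The laxity-based dispatch summarized above can be illustrated with a small least-laxity-first sketch. The field names, units, and station power budget are assumptions for illustration, not the paper's formulation:

```python
def laxity(ev, t):
    """Slack before charging must start to finish by the deadline:
    (time remaining) minus (time needed at full rate)."""
    time_needed = ev["remaining_kwh"] / ev["max_rate_kw"]
    return (ev["deadline"] - t) - time_needed

def llf_dispatch(evs, t, station_power_kw):
    """Least-laxity-first: charge the most urgent EVs within the station's
    total power budget; only the aggregate power matters to the station."""
    schedule = {}
    budget = station_power_kw
    for ev in sorted(evs, key=lambda e: laxity(e, t)):
        p = min(ev["max_rate_kw"], budget)
        if p <= 0:
            break
        schedule[ev["id"]] = p
        budget -= p
    return schedule

evs = [
    {"id": "a", "remaining_kwh": 20, "max_rate_kw": 10, "deadline": 5},  # laxity 3.0
    {"id": "b", "remaining_kwh": 30, "max_rate_kw": 10, "deadline": 4},  # laxity 1.0
    {"id": "c", "remaining_kwh": 5,  "max_rate_kw": 10, "deadline": 8},  # laxity 7.5
]
print(llf_dispatch(evs, t=0, station_power_kw=25))  # -> {'b': 10, 'a': 10, 'c': 5}
```

Because the LLF order is fully determined by the laxity values, the controller's state can be aggregated down to the total charging power, which is the representational saving the summary describes.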
arXiv Detail & Related papers (2021-08-07T00:34:48Z)
- A Deep Value-network Based Approach for Multi-Driver Order Dispatching [55.36656442934531]
We propose a deep reinforcement learning based solution for order dispatching.
We conduct large scale online A/B tests on DiDi's ride-dispatching platform.
Results show that CVNet consistently outperforms other recently proposed dispatching methods.
arXiv Detail & Related papers (2021-06-08T16:27:04Z)
- Demand-Side Scheduling Based on Multi-Agent Deep Actor-Critic Learning for Smart Grids [56.35173057183362]
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online.
The goal is to minimize the overall cost under a real-time pricing scheme.
We propose the formulation of a smart grid environment as a Markov game.
arXiv Detail & Related papers (2020-05-05T07:32:40Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.