Efficient Representation for Electric Vehicle Charging Station
Operations using Reinforcement Learning
- URL: http://arxiv.org/abs/2108.03236v1
- Date: Sat, 7 Aug 2021 00:34:48 GMT
- Title: Efficient Representation for Electric Vehicle Charging Station
Operations using Reinforcement Learning
- Authors: Kyung-bin Kwon, Hao Zhu
- Abstract summary: We develop aggregation schemes based on the urgency of EV charging, captured by the laxity value.
A least-laxity-first (LLF) rule is adopted so that only the total charging power of the EVCS needs to be considered.
In addition, we propose an equivalent state aggregation that is guaranteed to attain the same optimal policy.
- Score: 5.815007821143811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effectively operating an electric vehicle charging station (EVCS) is crucial
for enabling the rapid transition to electrified transportation. When solving this
problem using reinforcement learning (RL), the dimension of the state/action spaces
scales with the number of EVs and is thus very large and time-varying. This
dimensionality issue affects the efficiency and convergence properties of
generic RL algorithms. We develop aggregation schemes based on the urgency of EV
charging, captured by the laxity value. A least-laxity-first (LLF) rule is adopted
so that only the total charging power of the EVCS needs to be considered, while the
feasibility of individual EV schedules is still ensured. In addition, we propose an
equivalent state aggregation that is guaranteed to attain the same optimal policy.
Based on the proposed representation, a policy gradient method is used to find the
best parameters of the linear Gaussian policy. Numerical results validate the
performance improvement of the proposed representation approaches, which attain
higher rewards and more effective policies compared to the existing
approximation-based approach.
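To make the representation idea concrete, here is a minimal sketch (not the authors' code) of a laxity value, a least-laxity-first dispatch of the station-level charging power, and the score function of a linear Gaussian policy over an aggregated state. The exact laxity definition, the state features, and all variable names are illustrative assumptions, not taken from the paper.

    # Illustrative sketch only: the laxity definition, features, and names below
    # are assumptions for exposition, not the paper's exact formulation.
    from dataclasses import dataclass

    import numpy as np


    @dataclass
    class EV:
        remaining_energy: float  # energy still needed before departure (kWh)
        remaining_time: float    # time until scheduled departure (h)
        max_power: float         # maximum charging rate (kW)


    def laxity(ev: EV) -> float:
        # Assumed definition: time left minus the time needed to finish charging
        # at the maximum rate; a smaller value means a more urgent EV.
        return ev.remaining_time - ev.remaining_energy / ev.max_power


    def llf_dispatch(evs: list[EV], total_power: float, dt: float = 1.0) -> list[float]:
        """Disaggregate the station-level charging power (the RL action) to the
        individual EVs, serving the least-laxity EVs first so that individual
        schedules remain feasible whenever the total power budget allows it."""
        schedule = [0.0] * len(evs)
        budget = total_power
        for i in sorted(range(len(evs)), key=lambda j: laxity(evs[j])):
            power = min(evs[i].max_power, evs[i].remaining_energy / dt, budget)
            schedule[i] = power
            budget -= power
            if budget <= 1e-9:
                break
        return schedule


    def linear_gaussian_score(theta: np.ndarray, sigma: float,
                              agg_state: np.ndarray, action: float) -> np.ndarray:
        """Score function (gradient of log pi w.r.t. theta) of a linear Gaussian
        policy a ~ N(theta^T s, sigma^2) on an aggregated state s; a policy
        gradient method accumulates this quantity along sampled trajectories."""
        mean = float(theta @ agg_state)
        return (action - mean) / sigma ** 2 * agg_state


    if __name__ == "__main__":
        # Two EVs; the agent only chooses the total power, LLF splits it.
        evs = [EV(remaining_energy=10.0, remaining_time=2.0, max_power=7.0),
               EV(remaining_energy=5.0, remaining_time=1.0, max_power=7.0)]
        print(llf_dispatch(evs, total_power=10.0))  # most urgent EV is filled first

In this sketch the agent's action is one-dimensional (the total charging power) regardless of how many EVs are plugged in, which reflects the dimensionality reduction the abstract describes.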
Related papers
- Safety-Aware Reinforcement Learning for Electric Vehicle Charging Station Management in Distribution Network [4.842172685255376]
Electric vehicles (EVs) pose a significant risk to the distribution system operation in the absence of coordination.
This paper presents a safety-aware reinforcement learning (RL) algorithm designed to manage EV charging stations.
Our proposed algorithm does not rely on explicit penalties for constraint violations, eliminating the need to tune penalty coefficients.
arXiv Detail & Related papers (2024-03-20T01:57:38Z)
- Transforming Image Super-Resolution: A ConvFormer-based Efficient Approach [58.57026686186709]
We introduce the Convolutional Transformer layer (ConvFormer) and propose a ConvFormer-based Super-Resolution network (CFSR).
CFSR inherits the advantages of both convolution-based and transformer-based approaches.
Experiments demonstrate that CFSR strikes an optimal balance between computational cost and performance.
arXiv Detail & Related papers (2024-01-11T03:08:00Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- Federated Reinforcement Learning for Electric Vehicles Charging Control on Distribution Networks [42.04263644600909]
Multi-agent deep reinforcement learning (MADRL) has proven its effectiveness in EV charging control.
Existing MADRL-based approaches fail to consider the natural power flow of EV charging/discharging in the distribution network.
This paper proposes a novel approach that combines multi-EV charging/discharging with a radial distribution network (RDN) operating under optimal power flow.
arXiv Detail & Related papers (2023-08-17T05:34:46Z)
- DClEVerNet: Deep Combinatorial Learning for Efficient EV Charging Scheduling in Large-scale Networked Facilities [5.78463306498655]
Electric vehicles (EVs) might stress distribution networks significantly, degrading their performance and jeopardizing stability.
Modern power grids require coordinated or "smart" charging strategies capable of optimizing EV charging scheduling in a scalable and efficient fashion.
We formulate a time-coupled binary optimization problem that maximizes EV users' total welfare gain while accounting for the network's available power capacity and stations' occupancy limits.
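As a rough illustration of such a formulation (generic notation, not the paper's exact model), let the binary variable x_{i,t} indicate whether EV i charges at rate p_i in slot t:

    \[
    \max_{x_{i,t}\in\{0,1\}}\ \sum_{i}\sum_{t} w_{i,t}\,x_{i,t}
    \quad\text{s.t.}\quad
    \sum_{i} p_i\,x_{i,t}\le P^{\max}_t,\quad
    \sum_{i} x_{i,t}\le N^{\mathrm{spots}}\ \ \forall t,\qquad
    \sum_{t} p_i\,x_{i,t}\,\Delta t\ \ge\ E_i\ \ \forall i,
    \]

where w_{i,t} is a per-slot welfare gain, P^max_t the available network capacity, N^spots the station's occupancy limit, and E_i EV i's energy request; the last (assumed) constraint is what couples the time slots.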
arXiv Detail & Related papers (2023-05-18T14:03:47Z)
- Offline Policy Optimization in RL with Variance Regularization [142.87345258222942]
We propose variance regularization for offline RL algorithms, using stationary distribution corrections.
We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer.
The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithms.
arXiv Detail & Related papers (2022-12-29T18:25:01Z)
- A Deep Reinforcement Learning-Based Charging Scheduling Approach with Augmented Lagrangian for Electric Vehicle [2.686271754751717]
This paper formulates the EV charging scheduling problem as a constrained Markov decision process (CMDP).
A novel safe off-policy reinforcement learning (RL) approach is proposed in this paper to solve the CMDP.
Comprehensive numerical experiments with real-world electricity prices demonstrate that our proposed algorithm achieves high solution optimality and constraint compliance.
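For context, a constrained MDP augments the usual expected-return objective with expected cumulative cost constraints; in generic form (not necessarily this paper's exact formulation):

    \[
    \max_{\pi}\ \mathbb{E}_{\pi}\Big[\sum_{t} \gamma^{t}\, r(s_t,a_t)\Big]
    \quad\text{s.t.}\quad
    \mathbb{E}_{\pi}\Big[\sum_{t} \gamma^{t}\, c_j(s_t,a_t)\Big]\ \le\ d_j,\qquad j=1,\dots,m,
    \]

where the reward r captures the charging objectives (e.g., cost or user satisfaction) and the costs c_j encode the operational constraints that the safe RL algorithm must respect.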
arXiv Detail & Related papers (2022-09-20T14:56:51Z)
- Data-Driven Chance Constrained AC-OPF using Hybrid Sparse Gaussian Processes [57.70237375696411]
The paper proposes a fast data-driven setup that uses the sparse and hybrid Gaussian processes (GP) framework to model the power flow equations with input uncertainty.
We demonstrate the efficiency of the proposed approach through a numerical study over multiple IEEE test cases, showing up to two times faster and more accurate solutions.
arXiv Detail & Related papers (2022-08-30T09:27:59Z)
- Data-Driven Stochastic AC-OPF using Gaussian Processes [54.94701604030199]
Integrating a significant amount of renewables into a power grid is probably the most effective way to reduce carbon emissions from power grids and slow down climate change.
This paper presents an alternative data-driven approach based on the AC power flow equations that can incorporate input uncertainty.
The Gaussian process (GP) approach learns a simple yet non-constrained data-driven model that closes this gap to the AC power flow equations.
arXiv Detail & Related papers (2022-07-21T23:02:35Z)
- Learning to Operate an Electric Vehicle Charging Station Considering Vehicle-grid Integration [4.855689194518905]
We propose a novel centralized allocation and decentralized execution (CADE) reinforcement learning (RL) framework to maximize the charging station's profit.
In the centralized allocation process, EVs are allocated to either the waiting or charging spots. In the decentralized execution process, each charger makes its own charging/discharging decision while learning the action-value functions from a shared replay memory.
Numerical results show that the proposed CADE framework is both computationally efficient and scalable, and significantly outperforms the baseline model predictive control (MPC).
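A minimal skeleton of how such a centralized-allocation/decentralized-execution loop could look (assumed structure and names, not the paper's implementation) is sketched below; each charger keeps its own action-value estimates but samples training batches from one shared replay memory.

    # Illustrative skeleton only: assumed structure, not the CADE paper's code.
    import random
    from collections import deque


    class SharedReplay:
        """One replay memory shared by all charger agents."""
        def __init__(self, capacity: int = 10_000):
            self.buffer = deque(maxlen=capacity)

        def push(self, transition):  # (state, action, reward, next_state)
            self.buffer.append(transition)

        def sample(self, batch_size: int):
            return random.sample(self.buffer, min(batch_size, len(self.buffer)))


    class ChargerAgent:
        """Decentralized execution: each charger picks discharge/idle/charge."""
        ACTIONS = (-1, 0, +1)

        def __init__(self, eps: float = 0.1):
            self.q = {}  # tabular action-values, enough for a sketch
            self.eps = eps

        def act(self, state):
            if random.random() < self.eps:
                return random.choice(self.ACTIONS)
            return max(self.ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

        def learn(self, batch, alpha: float = 0.1, gamma: float = 0.99):
            for s, a, r, s2 in batch:
                target = r + gamma * max(self.q.get((s2, b), 0.0) for b in self.ACTIONS)
                self.q[(s, a)] = (1 - alpha) * self.q.get((s, a), 0.0) + alpha * target


    def allocate(arriving_evs, free_charging_spots: int):
        """Centralized allocation: fill free charging spots first, queue the rest."""
        return arriving_evs[:free_charging_spots], arriving_evs[free_charging_spots:]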
arXiv Detail & Related papers (2021-11-01T23:10:28Z)
- Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.