Safety-Aware Reinforcement Learning for Electric Vehicle Charging Station Management in Distribution Network
- URL: http://arxiv.org/abs/2403.13236v1
- Date: Wed, 20 Mar 2024 01:57:38 GMT
- Title: Safety-Aware Reinforcement Learning for Electric Vehicle Charging Station Management in Distribution Network
- Authors: Jiarong Fan, Ariel Liebman, Hao Wang
- Abstract summary: Electric vehicles (EVs) pose a significant risk to the distribution system operation in the absence of coordination.
This paper presents a safety-aware reinforcement learning (RL) algorithm designed to manage EV charging stations.
Our proposed algorithm does not rely on explicit penalties for constraint violations, eliminating the need for penalty coefficient tuning.
- Score: 4.842172685255376
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing integration of electric vehicles (EVs) into the grid can pose a significant risk to the distribution system operation in the absence of coordination. In response to the need for effective coordination of EVs within the distribution network, this paper presents a safety-aware reinforcement learning (RL) algorithm designed to manage EV charging stations while ensuring the satisfaction of system constraints. Unlike existing methods, our proposed algorithm does not rely on explicit penalties for constraint violations, eliminating the need for penalty coefficient tuning. Furthermore, managing EV charging stations is further complicated by multiple uncertainties, notably the variability in solar energy generation and energy prices. To address this challenge, we develop an off-policy RL algorithm to efficiently utilize data to learn patterns in such uncertain environments. Our algorithm also incorporates a maximum entropy framework to enhance the RL algorithm's exploratory process, preventing convergence to local optimal solutions. Simulation results demonstrate that our algorithm outperforms traditional RL algorithms in managing EV charging in the distribution network.
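The abstract combines three ingredients: constraint satisfaction without penalty terms, off-policy learning under uncertainty, and a maximum-entropy objective. The sketch below is not the authors' implementation; the names project_to_feasible, soft_value_target, and station_capacity_kw are assumed, and the safety mechanism shown (projecting raw charging actions onto the feasible set) is one common way to avoid penalty-based constraint handling.

```python
# Hypothetical sketch, not the paper's code: it illustrates (i) enforcing
# constraints by projecting raw charging actions onto the feasible set rather
# than penalizing violations, and (ii) the maximum-entropy (soft) Bellman
# target used by off-policy max-entropy RL methods.
import numpy as np


def project_to_feasible(raw_powers_kw, charger_limits_kw, station_capacity_kw):
    """Clip each charger to its rating, then scale all chargers down uniformly
    if their aggregate demand would exceed the station/transformer capacity."""
    powers = np.clip(raw_powers_kw, 0.0, charger_limits_kw)
    total = powers.sum()
    if total > station_capacity_kw:
        powers *= station_capacity_kw / total
    return powers


def soft_value_target(reward, next_q, next_log_prob, gamma=0.99, alpha=0.2):
    """Soft Bellman target: the entropy bonus (-alpha * log pi) keeps the
    policy exploratory and discourages convergence to local optima."""
    return reward + gamma * (next_q - alpha * next_log_prob)


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    raw = rng.uniform(0.0, 22.0, size=5)       # raw policy outputs in kW
    limits = np.full(5, 11.0)                  # per-charger rating in kW
    safe = project_to_feasible(raw, limits, station_capacity_kw=30.0)
    print("safe powers (kW):", np.round(safe, 2), "total:", round(safe.sum(), 2))
    print("soft target:", soft_value_target(reward=1.5, next_q=4.0, next_log_prob=-1.2))
```

Because every executed action is feasible by construction, no penalty coefficient needs to be tuned in the reward, which is the property the abstract highlights.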
Related papers
- Optimizing Load Scheduling in Power Grids Using Reinforcement Learning and Markov Decision Processes [0.0]
This paper proposes a reinforcement learning (RL) approach to address the challenges of dynamic load scheduling.
Our results show that the RL-based method provides a robust and scalable solution for real-time load scheduling.
arXiv Detail & Related papers (2024-10-23T09:16:22Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs)
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- An Efficient Distributed Multi-Agent Reinforcement Learning for EV Charging Network Control [2.5477011559292175]
We introduce a decentralized Multi-agent Reinforcement Learning (MARL) charging framework that prioritizes the preservation of privacy for EV owners.
Our results demonstrate that the CTDE framework improves the performance of the charging network by reducing the network costs.
arXiv Detail & Related papers (2023-08-24T16:53:52Z)
- Federated Reinforcement Learning for Electric Vehicles Charging Control on Distribution Networks [42.04263644600909]
Multi-agent deep reinforcement learning (MADRL) has proven its effectiveness in EV charging control.
Existing MADRL-based approaches fail to consider the natural power flow of EV charging/discharging in the distribution network.
This paper proposes a novel approach that combines multi-EV charging/discharging with a radial distribution network (RDN) operating under optimal power flow.
arXiv Detail & Related papers (2023-08-17T05:34:46Z)
- A Safe Genetic Algorithm Approach for Energy Efficient Federated Learning in Wireless Communication Networks [53.561797148529664]
Federated Learning (FL) has emerged as a decentralized technique where, contrary to traditional centralized approaches, devices perform model training in a collaborative manner.
Despite the existing efforts made in FL, its environmental impact is still under investigation, since several critical challenges regarding its applicability to wireless networks have been identified.
The current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization.
arXiv Detail & Related papers (2023-06-25T13:10:38Z)
- DClEVerNet: Deep Combinatorial Learning for Efficient EV Charging Scheduling in Large-scale Networked Facilities [5.78463306498655]
Electric vehicles (EVs) might stress distribution networks significantly, degrading their performance and jeopardizing their stability.
Modern power grids require coordinated or "smart" charging strategies capable of optimizing EV charging scheduling in a scalable and efficient fashion.
We formulate a time-coupled binary optimization problem that maximizes EV users' total welfare gain while accounting for the network's available power capacity and stations' occupancy limits.
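A generic form of such a time-coupled binary welfare-maximization problem, written here only for illustration (the symbols $x_{i,t}$, $w_{i,t}$, $p_i$, $P_t$, $C_t$, $E_i$, and $\Delta t$ are assumed, not the paper's exact notation), is

$$
\begin{aligned}
\max_{x_{i,t}\in\{0,1\}} \quad & \sum_{i}\sum_{t} w_{i,t}\,x_{i,t} \\
\text{s.t.} \quad & \sum_{i} p_i\,x_{i,t} \le P_t \quad \text{(available power capacity at time } t\text{)} \\
& \sum_{i} x_{i,t} \le C_t \quad \text{(station occupancy limit at time } t\text{)} \\
& \sum_{t} p_i\,\Delta t\,x_{i,t} \ge E_i \quad \text{(time-coupled energy requirement of EV } i\text{)}
\end{aligned}
$$

where $x_{i,t}$ indicates whether EV $i$ charges in slot $t$, $w_{i,t}$ is its welfare gain, $p_i$ its charging power, and $E_i$ its energy demand.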
arXiv Detail & Related papers (2023-05-18T14:03:47Z)
- Unsupervised Optimal Power Flow Using Graph Neural Networks [172.33624307594158]
We use a graph neural network to learn a nonlinear parametrization between the power demanded and the corresponding allocation.
We show through simulations that the use of GNNs in this unsupervised learning context leads to solutions comparable to standard solvers.
arXiv Detail & Related papers (2022-10-17T17:30:09Z)
- Stabilizing Voltage in Power Distribution Networks via Multi-Agent Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Efficient Representation for Electric Vehicle Charging Station Operations using Reinforcement Learning [5.815007821143811]
We develop aggregation schemes that are based on the urgency of EV charging, namely the laxity value.
A least-laxity first (LLF) rule is adopted to consider only the total charging power of the EVCS.
In addition, we propose an equivalent state aggregation that can guarantee to attain the same optimal policy.
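As a rough illustration of the laxity idea (a hypothetical sketch, not the paper's code; the EV class, its field names, and llf_dispatch are assumed), laxity is the slack between the time left before departure and the minimum time needed to finish charging at full power, and a least-laxity-first rule serves the most urgent vehicles first from the station's total power budget.

```python
# Hypothetical illustration of the laxity value and a least-laxity-first (LLF)
# dispatch rule; class and field names are assumed, not taken from the paper.
from dataclasses import dataclass


@dataclass
class EV:
    name: str
    remaining_kwh: float        # energy still needed before departure
    max_power_kw: float         # maximum charging power of this session
    hours_to_departure: float   # time left until the EV leaves

    @property
    def laxity_h(self) -> float:
        # Slack between the time left and the minimum time needed to finish
        # charging at full power; lower laxity means a more urgent EV.
        return self.hours_to_departure - self.remaining_kwh / self.max_power_kw


def llf_dispatch(evs, total_power_kw):
    """Allocate a station-level power budget to EVs in least-laxity-first order."""
    schedule, budget = {}, total_power_kw
    for ev in sorted(evs, key=lambda e: e.laxity_h):
        schedule[ev.name] = min(ev.max_power_kw, budget)
        budget -= schedule[ev.name]
    return schedule


evs = [EV("A", 20.0, 11.0, 3.0), EV("B", 5.0, 11.0, 1.0), EV("C", 30.0, 11.0, 6.0)]
print(llf_dispatch(evs, total_power_kw=20.0))  # the most urgent EV ("B") is served first
```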
arXiv Detail & Related papers (2021-08-07T00:34:48Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)