Safety Constrained Multi-Agent Reinforcement Learning for Active Voltage Control
- URL: http://arxiv.org/abs/2405.08443v2
- Date: Tue, 3 Sep 2024 04:10:57 GMT
- Title: Safety Constrained Multi-Agent Reinforcement Learning for Active Voltage Control
- Authors: Yang Qu, Jinming Ma, Feng Wu
- Abstract summary: We formalize the active voltage control problem as a constrained Markov game and propose a safety-constrained MARL algorithm.
We evaluate our approach in a power distribution network simulation environment with real-world-scale scenarios.
- Score: 34.95810473913879
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Active voltage control presents a promising avenue for relieving power congestion and enhancing voltage quality, taking advantage of distributed controllable generators in the power network, such as roof-top photovoltaics. While Multi-Agent Reinforcement Learning (MARL) has emerged as a compelling approach to this challenge, existing MARL approaches tend to overlook the constrained-optimization nature of the problem, failing to guarantee safety constraints. In this paper, we formalize the active voltage control problem as a constrained Markov game and propose a safety-constrained MARL algorithm. We extend the primal-dual optimization RL method to multi-agent settings and augment it with a novel double safety estimation approach to learn the policy and to update the Lagrange multiplier. In addition, we propose different cost functions and investigate their influence on the behavior of our constrained MARL method. We evaluate our approach in a power distribution network simulation environment with real-world-scale scenarios. Experimental results demonstrate the effectiveness of the proposed method compared with state-of-the-art MARL methods. This paper is published at https://www.ijcai.org/Proceedings/2024/.
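The dual-variable update at the heart of such primal-dual methods is compact enough to sketch. The snippet below is a minimal, hypothetical illustration of a generic Lagrangian constrained-RL update, not the paper's actual implementation; the function name, step size, and budget are assumptions. The policy (primal) step maximizes the relaxed objective L(pi, lambda) = J_r(pi) - lambda * (J_c(pi) - d), while the multiplier (dual) step performs projected gradient ascent on a safety-cost estimate; the paper's "double safety estimation" uses separate estimates for these two steps.

```python
def dual_update(lmbda: float, cost_estimate: float, budget: float,
                step_size: float = 1e-3) -> float:
    """Projected gradient ascent on the Lagrange multiplier:
    lambda <- max(0, lambda + eta * (J_c - d))."""
    return max(0.0, lmbda + step_size * (cost_estimate - budget))

# Illustrative run: estimated safety costs above the budget d = 0.05
# push lambda up, strengthening the constraint penalty in the policy
# objective L(pi, lambda) = J_r(pi) - lambda * (J_c(pi) - d).
lmbda = 0.0
for cost in [0.12, 0.10, 0.06, 0.04]:  # stand-in per-epoch cost estimates
    lmbda = dual_update(lmbda, cost, budget=0.05)
```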
Related papers
- Augmented Lagrangian-Based Safe Reinforcement Learning Approach for Distribution System Volt/VAR Control [1.1059341532498634]
This paper formulates the Volt/VAR control problem as a constrained Markov decision process (CMDP).
A novel safe off-policy reinforcement learning (RL) approach is proposed to solve the CMDP.
A two-stage strategy is adopted for offline training and online execution, so an accurate distribution system model is no longer needed.
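For context, a generic CMDP with a single safety constraint, together with one standard augmented-Lagrangian relaxation (the paper's exact penalty form may differ), can be written as:

```latex
\max_{\pi}\; J_r(\pi)=\mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t} r(s_t,a_t)\Big]
\quad \text{s.t.} \quad
J_c(\pi)=\mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t} c(s_t,a_t)\Big]\le d,
\qquad
\mathcal{L}_{\mu}(\pi,\lambda)=J_r(\pi)-\lambda\big(J_c(\pi)-d\big)-\frac{\mu}{2}\max\{0,\,J_c(\pi)-d\}^{2}.
```

The quadratic term distinguishes the augmented Lagrangian from the plain Lagrangian relaxation and typically stabilizes the dual updates.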
arXiv Detail & Related papers (2024-10-19T19:45:09Z)
- A novel ANROA based control approach for grid-tied multi-functional solar energy conversion system [0.0]
An adaptive control approach for a three-phase grid-interfaced solar photovoltaic system is proposed and discussed.
This method combines an Adaptive Neuro-Fuzzy Inference System (ANFIS) with a Rain Optimization Algorithm (ROA).
The main goal is to avoid power quality problems such as voltage fluctuations, harmonics, and flicker, as well as unbalanced loads and excessive reactive power usage.
arXiv Detail & Related papers (2024-01-26T09:12:39Z)
- AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation [65.4532392602682]
One of the main challenges in offline Reinforcement Learning (RL) is the distribution shift that arises from the learned policy deviating from the data collection policy.
This is often addressed by avoiding out-of-distribution (OOD) actions during policy improvement as their presence can lead to substantial performance degradation.
We introduce AlberDICE, an offline MARL algorithm that performs centralized training of individual agents based on stationary distribution optimization.
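As background, the single-agent DICE template that such stationary-distribution methods build on optimizes over occupancy measures subject to Bellman flow constraints; this is a generic form, not AlberDICE's exact per-agent objective:

```latex
\max_{d \ge 0}\;\; \mathbb{E}_{(s,a)\sim d}\big[r(s,a)\big] - \alpha\, D_{f}\big(d \,\|\, d^{\mathcal{D}}\big)
\quad \text{s.t.} \quad
\sum_{a} d(s,a) = (1-\gamma)\,\mu_{0}(s) + \gamma \sum_{\bar{s},\bar{a}} P(s \mid \bar{s},\bar{a})\, d(\bar{s},\bar{a}) \quad \forall s,
```

where d^D is the dataset occupancy and D_f is an f-divergence that keeps the learned occupancy close to the data, discouraging OOD actions. Per its title, AlberDICE alternates this style of optimization across agents rather than solving it jointly.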
arXiv Detail & Related papers (2023-11-03T18:56:48Z)
- Robust Multi-Agent Reinforcement Learning via Adversarial Regularization: Theoretical Foundation and Stable Algorithms [79.61176746380718]
Multi-Agent Reinforcement Learning (MARL) has shown promising results across several domains.
MARL policies often lack robustness and are sensitive to small changes in their environment.
We show that we can gain robustness by controlling a policy's Lipschitz constant.
We propose a new robust MARL framework, ERNIE, that promotes the Lipschitz continuity of the policies.
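A common way to promote Lipschitz continuity of a policy, in the spirit described above, is an adversarial smoothness penalty; this is a generic form, and ERNIE's exact regularizer may differ:

```latex
\min_{\theta}\; \mathcal{L}_{\mathrm{RL}}(\theta) + \beta\, \mathbb{E}_{s}\Big[\max_{\|\delta\|\le\epsilon} \big\| \pi_{\theta}(\cdot \mid s+\delta) - \pi_{\theta}(\cdot \mid s) \big\| \Big],
```

which bounds how much the policy can change under small perturbations of its input, i.e., it controls an effective Lipschitz constant. The inner maximization is typically approximated with a few gradient-ascent steps on the perturbation.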
arXiv Detail & Related papers (2023-10-16T20:14:06Z)
- Stabilizing Voltage in Power Distribution Networks via Multi-Agent Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
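Below is a minimal sketch of what a Transformer-based multi-agent critic can look like: each agent's observation-action pair becomes one token, so self-attention models inter-agent coupling. This is an illustrative PyTorch module under assumed dimensions, not T-MAAC's actual architecture:

```python
import torch
import torch.nn as nn

class TransformerCritic(nn.Module):
    """Illustrative multi-agent critic: each agent's (obs, action) pair is
    embedded as one token; self-attention models inter-agent coupling.
    A hypothetical sketch, not T-MAAC's actual architecture."""

    def __init__(self, obs_dim: int, act_dim: int,
                 d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(obs_dim + act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.q_head = nn.Linear(d_model, 1)  # one Q-value per agent token

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim); act: (batch, n_agents, act_dim)
        tokens = self.embed(torch.cat([obs, act], dim=-1))
        h = self.encoder(tokens)           # attention across agent tokens
        return self.q_head(h).squeeze(-1)  # (batch, n_agents)

# Example: 8 samples, 6 agents, 10-dim observations, 1-dim actions.
critic = TransformerCritic(obs_dim=10, act_dim=1)
q_values = critic(torch.randn(8, 6, 10), torch.randn(8, 6, 1))
```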
arXiv Detail & Related papers (2022-06-08T07:48:42Z)
- Adversarially Robust Learning for Security-Constrained Optimal Power Flow [55.816266355623085]
We tackle the problem of N-k security-constrained optimal power flow (SCOPF).
N-k SCOPF is a core problem for the operation of electrical grids.
Inspired by methods in adversarially robust training, we frame N-k SCOPF as a minimax optimization problem.
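Framing N-k SCOPF adversarially gives a minimax problem of the following form (generic notation, not necessarily the paper's exact formulation):

```latex
\min_{x \in \mathcal{X}} \; \max_{c \in \mathcal{C}_{k}} \; \ell(x, c),
```

where x is the generation dispatch, C_k is the set of contingencies with up to k simultaneous component outages, and l measures operating cost plus constraint violations under contingency c. Adversarial-training machinery then alternates between finding a worst-case contingency and hardening the dispatch against it.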
arXiv Detail & Related papers (2021-11-12T22:08:10Z)
- Multi-Agent Reinforcement Learning for Active Voltage Control on Power Distribution Networks [2.992389186393994]
The emerging trend of decarbonisation is placing excessive stress on power distribution networks.
Active voltage control is seen as a promising solution to relieve power congestion and improve voltage quality without extra hardware investment.
This paper formulates the active voltage control problem in the framework of Dec-POMDP and establishes an open-source environment.
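For reference, a Dec-POMDP is conventionally specified as the tuple

```latex
\big\langle \mathcal{N},\, \mathcal{S},\, \{\mathcal{A}_{i}\}_{i \in \mathcal{N}},\, P,\, r,\, \{\Omega_{i}\}_{i \in \mathcal{N}},\, O,\, \gamma \big\rangle,
```

with agent set N, states S, per-agent actions A_i, transition kernel P, shared reward r, per-agent observations Omega_i, observation function O, and discount gamma. In the voltage-control instantiation, each agent would control a distributed generator (e.g., a PV inverter) and act on partial, local observations of the network.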
arXiv Detail & Related papers (2021-10-27T09:31:22Z)
- Strictly Batch Imitation Learning by Energy-based Distribution Matching [104.33286163090179]
Consider learning a policy purely on the basis of demonstrated behavior -- that is, with no access to reinforcement signals, no knowledge of transition dynamics, and no further interaction with the environment.
One solution is simply to retrofit existing algorithms for apprenticeship learning to work in the offline setting.
But such an approach leans heavily on off-policy evaluation or offline model estimation, and can be indirect and inefficient.
We argue that a good solution should be able to explicitly parameterize a policy, implicitly learn from rollout dynamics, and operate in an entirely offline fashion.
arXiv Detail & Related papers (2020-06-25T03:27:59Z)
- Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks [3.260913246106564]
We propose a novel two-stage deep reinforcement learning (DRL) method to improve the voltage profile by regulating inverter-based energy resources.
In the offline stage, a highly efficient adversarial reinforcement learning algorithm is developed to train an offline agent robust to the model mismatch.
In the subsequent online stage, the offline agent is safely transferred to serve as the online agent, continuing to learn and control online with significantly improved safety and efficiency.
arXiv Detail & Related papers (2020-05-20T08:02:13Z)