Bayesian Reinforcement Learning for Automatic Voltage Control under
Cyber-Induced Uncertainty
- URL: http://arxiv.org/abs/2305.16469v1
- Date: Thu, 25 May 2023 20:58:08 GMT
- Title: Bayesian Reinforcement Learning for Automatic Voltage Control under
Cyber-Induced Uncertainty
- Authors: Abhijeet Sahu and Katherine Davis
- Abstract summary: This work introduces a Bayesian Reinforcement Learning (BRL) approach for power system control problems.
It focuses on sustained voltage control under uncertainty in a cyber-adversarial environment.
BRL techniques assist in automatically finding a threshold between exploration and exploitation in various RL techniques.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Voltage control is crucial to the reliable operation of large-scale
power systems, as timely reactive power support can help prevent widespread
outages. However, power systems currently have no built-in mechanism to ensure
that the voltage control objective of maintaining reliable operation will be
sustained under the uncertainty introduced by an adversary's presence. Hence,
this work introduces a Bayesian Reinforcement Learning (BRL) approach for power
system control problems, with a focus on sustained voltage control under
uncertainty in a cyber-adversarial environment. The work proposes a data-driven
BRL-based approach for automatic voltage control by formulating and solving a
Partially Observable Markov Decision Process (POMDP) whose states are only
partially observable due to cyber intrusions. The techniques are evaluated on
the WSCC and IEEE 14-bus systems. Additionally, the BRL techniques help
automatically find a threshold between exploration and exploitation in various
RL techniques.
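At its core, the proposed formulation treats the grid's voltage state as hidden: the controller only sees measurements that a cyber intruder may spoof or drop, and must maintain a belief over the true state. The sketch below is a minimal, generic discrete Bayes-filter belief update of the kind such a POMDP formulation relies on; the state and action labels, the transition model T, and the observation model O are illustrative placeholders, not the paper's models.

```python
import numpy as np

STATES = ["low", "nominal", "high"]             # coarse voltage regimes (assumed)
ACTIONS = ["inject_var", "hold", "absorb_var"]  # reactive-power actions (assumed)

# T[a][s, s'] = P(next state s' | state s, action a) -- placeholder dynamics
T = {a: np.full((3, 3), 1.0 / 3.0) for a in ACTIONS}
# O[s', z] = P(observation z | true state s') -- intrusions blur this mapping
O = np.array([[0.70, 0.20, 0.10],
              [0.15, 0.70, 0.15],
              [0.10, 0.20, 0.70]])

def belief_update(belief, action, obs_idx):
    """Bayes-filter update: b'(s') is proportional to O(z|s') * sum_s T(s'|s,a) b(s)."""
    predicted = T[action].T @ belief           # prediction through the dynamics
    posterior = O[:, obs_idx] * predicted      # correction by the noisy observation
    return posterior / posterior.sum()

b = np.array([1.0, 1.0, 1.0]) / 3.0            # uniform prior over voltage regimes
b = belief_update(b, "inject_var", obs_idx=1)  # observed "nominal" (possibly spoofed)
print(dict(zip(STATES, np.round(b, 3))))
```

A BRL agent would act on this belief rather than on the raw (potentially corrupted) measurements, which is what allows the control objective to persist through cyber-induced uncertainty.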
Related papers
- Robust Deep Reinforcement Learning for Inverter-based Volt-Var Control in Partially Observable Distribution Networks [11.073055284983626]
A key issue in DRL-based approaches is the limited measurement deployment in active distribution networks.
To address this problem, this paper proposes a robust DRL approach with a conservative critic and a surrogate reward.
arXiv Detail & Related papers (2024-08-13T10:02:10Z) - Distributed Energy Management and Demand Response in Smart Grids: A
Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Stabilizing Voltage in Power Distribution Networks via Multi-Agent
Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z) - Contingency-constrained economic dispatch with safe reinforcement learning [7.133681867718039]
Reinforcement-learning based (RL) controllers can address this challenge, but cannot themselves provide safety guarantees.
We propose a formally validated RL controller for economic dispatch.
We extend conventional constraints by a time-dependent constraint encoding the islanding contingency.
Unsafe actions are projected into the safe action space while leveraging constrained zonotope set representations for computational efficiency (a simplified projection sketch appears after this list).
arXiv Detail & Related papers (2022-05-12T16:52:48Z) - Safe Reinforcement Learning for Grid Voltage Control [0.0]
Undervoltage load shedding has been considered a standard approach to recovering the voltage stability of the electric power grid under emergency conditions.
In this paper, we discuss two novel safe RL approaches, namely a constrained optimization approach and a barrier function-based approach.
arXiv Detail & Related papers (2021-12-02T18:34:50Z) - Multi-Agent Reinforcement Learning for Active Voltage Control on Power
Distribution Networks [2.992389186393994]
The emerging trend of decarbonisation is placing excessive stress on power distribution networks.
Active voltage control is seen as a promising solution to relieve power congestion and improve voltage quality without extra hardware investment.
This paper formulates the active voltage control problem in the framework of Dec-POMDP and establishes an open-source environment.
arXiv Detail & Related papers (2021-10-27T09:31:22Z) - Improving Robustness of Reinforcement Learning for Power System Control
with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z) - Learning Robust Hybrid Control Barrier Functions for Uncertain Systems [68.30783663518821]
We propose robust hybrid control barrier functions as a means to synthesize control laws that ensure robust safety.
Based on this notion, we formulate an optimization problem for learning robust hybrid control barrier functions from data.
Our techniques allow us to safely expand the region of attraction of a compass gait walker that is subject to model uncertainty.
arXiv Detail & Related papers (2021-01-16T17:53:35Z) - Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable
Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
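For the contingency-constrained economic dispatch entry above, which projects unsafe RL actions onto a safe set represented by constrained zonotopes, the toy sketch below conveys the general idea with a much simpler safe set (per-unit dispatch bounds plus a power-balance equality) and alternating projections; the numbers and the projection routine are assumptions for illustration, not the paper's constrained-zonotope machinery.

```python
import numpy as np

def project_to_safe(action, lo, hi, demand, iters=500):
    """Approximate Euclidean projection of a dispatch vector onto
    {p : lo <= p <= hi, sum(p) == demand} via alternating projections."""
    p = np.asarray(action, dtype=float)
    for _ in range(iters):
        p = np.clip(p, lo, hi)               # project onto the box of unit limits
        p = p + (demand - p.sum()) / p.size  # project onto the power-balance plane
    return p

# An RL policy proposes an infeasible dispatch (violates unit limits and demand).
raw_action = np.array([1.8, -0.2, 0.9])
safe_action = project_to_safe(raw_action, lo=np.zeros(3), hi=np.ones(3), demand=2.0)
print(safe_action, safe_action.sum())
```

The cited work replaces this generic projection with a constrained-zonotope representation of the safe set, which keeps the projection computationally cheap even with the time-dependent islanding constraint.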
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.