Physics-informed Evolutionary Strategy based Control for Mitigating
Delayed Voltage Recovery
- URL: http://arxiv.org/abs/2111.14352v1
- Date: Mon, 29 Nov 2021 07:12:40 GMT
- Title: Physics-informed Evolutionary Strategy based Control for Mitigating
Delayed Voltage Recovery
- Authors: Yan Du, Qiuhua Huang, Renke Huang, Tianzhixi Yin, Jie Tan, Wenhao Yu,
Xinya Li
- Abstract summary: We propose a novel data-driven, real-time power system voltage control method based on the physics-informed guided meta evolutionary strategy (ES).
The main objective is to quickly provide an adaptive control strategy to mitigate the fault-induced delayed voltage recovery (FIDVR) problem.
- Score: 14.44961822756759
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this work we propose a novel data-driven, real-time power system voltage
control method based on the physics-informed guided meta evolutionary strategy
(ES). The main objective is to quickly provide an adaptive control strategy to
mitigate the fault-induced delayed voltage recovery (FIDVR) problem.
Reinforcement learning (RL) methods have been developed for the same or
similar challenging control problems, but they suffer from training
inefficiency and a lack of robustness in "corner" or unseen scenarios. On the
other hand,
extensive physical knowledge has been developed in power systems but little has
been leveraged in learning-based approaches. To address these challenges, we
introduce the trainable action mask technique for flexibly embedding physical
knowledge into RL models to rule out unnecessary or unfavorable actions, and
achieve notable improvements in sample efficiency, control performance and
robustness. Furthermore, our method leverages past learning experience to
derive a surrogate gradient that guides and accelerates exploration during
training. Case studies on the IEEE 300-bus system and comparisons with other
state-of-the-art benchmark methods demonstrate the effectiveness and
advantages of our method.
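The trainable action mask can be pictured as a physics-derived gate on the policy's action logits. Below is a minimal sketch in NumPy, assuming one load-shedding action per bus that is admissible only when that bus's voltage sits below a trainable threshold; the per-bus offsets `mask_params` and the 0.95 p.u. base are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def physics_action_mask(voltages, mask_params, v_base=0.95):
    """Gate each bus's load-shedding action on a trainable voltage threshold.

    voltages:    per-bus voltages in p.u.
    mask_params: trainable per-bus threshold offsets (learned with the policy)
    """
    thresholds = v_base + mask_params
    return (voltages < thresholds).astype(float)   # 1 = action allowed

def masked_softmax(logits, mask):
    """Rule out masked actions by driving their logits to -inf."""
    if not mask.any():                  # degenerate case: allow everything
        mask = np.ones_like(mask)
    z = np.where(mask > 0, logits, -np.inf)
    z = z - z.max()                     # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Example: only buses with depressed voltages remain shed-able.
v = np.array([0.88, 0.97, 0.91])
mask = physics_action_mask(v, mask_params=np.zeros(3))
print(masked_softmax(np.array([1.0, 2.0, 0.5]), mask))
```

Because the thresholds are parameters rather than fixed rules, they can be trained jointly with the policy, which is what makes the mask "trainable" rather than a hard-coded heuristic.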
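The surrogate-gradient mechanism follows the guided-ES family of methods: past learning experience supplies a descent direction, and ES perturbations are biased toward it instead of being drawn isotropically. A hedged sketch; the antithetic estimator and the mixing weight `alpha` are generic guided-ES machinery and may differ from the paper's exact update.

```python
import numpy as np

def guided_es_step(theta, fitness, surrogate_grad, sigma=0.1,
                   alpha=0.5, n_pairs=8, lr=0.05):
    """One antithetic ES ascent step with perturbations biased toward a
    surrogate gradient direction derived from past learning experience."""
    g = surrogate_grad / (np.linalg.norm(surrogate_grad) + 1e-8)
    grad_est = np.zeros_like(theta)
    for _ in range(n_pairs):
        # Mix an isotropic sample with the surrogate direction.
        eps = (np.sqrt(1 - alpha) * np.random.randn(theta.size)
               + np.sqrt(alpha) * np.random.randn() * g)
        delta = fitness(theta + sigma * eps) - fitness(theta - sigma * eps)
        grad_est += delta / (2 * sigma) * eps
    return theta + lr * grad_est / n_pairs

# Toy check: maximize -||theta||^2 with a stale gradient as the guide.
theta = np.ones(4)
for _ in range(50):
    theta = guided_es_step(theta, lambda t: -t @ t, surrogate_grad=-theta)
print(np.linalg.norm(theta))   # shrinks toward 0
```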
Related papers
- Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning [97.2995389188179]
Recent research has begun to approach large language model (LLM) unlearning via gradient ascent (GA).
Despite their simplicity and efficiency, we suggest that GA-based methods are prone to excessive unlearning.
We propose several controlling methods that can regulate the extent of excessive unlearning.
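As a rough illustration of "controlled" unlearning, one simple regulator caps the size of each gradient-ascent step on the forget set; the norm cap below is a generic control, not necessarily one of the paper's proposed methods.

```python
import numpy as np

def controlled_ga_unlearn_step(w, grad_forget, lr=0.01, max_norm=0.1):
    """Gradient *ascent* on the forget-set loss, with a norm cap as one
    simple control against excessive unlearning."""
    step = lr * grad_forget
    n = np.linalg.norm(step)
    if n > max_norm:
        step *= max_norm / n
    return w + step
```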
arXiv Detail & Related papers (2024-06-13T14:41:00Z)
Power Transformer Fault Prediction Based on Knowledge Graphs [9.690455133923667]
The scarcity of extensive fault data makes it difficult to apply machine learning techniques effectively.
We propose a novel approach that leverages knowledge graph (KG) technology in combination with gradient boosting decision trees (GBDT).
This method is designed to efficiently learn from a small set of high-dimensional data, integrating various factors influencing transformer faults and historical operational data.
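Conceptually, the pipeline flattens knowledge-graph context into tabular features and hands them to a GBDT, which copes well with few, high-dimensional samples. A minimal sketch with scikit-learn on synthetic data; the feature layout and sizes are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical features per transformer: KG-derived attributes (design,
# topology neighbourhood) concatenated with historical operating statistics.
X = rng.random((60, 12))            # 60 transformers x 12 features
y = rng.integers(0, 2, 60)          # fault / no-fault labels
model = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)
print(model.predict_proba(X[:3])[:, 1])   # predicted fault probabilities
```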
arXiv Detail & Related papers (2024-02-11T19:14:28Z)
PMU measurements based short-term voltage stability assessment of power systems via deep transfer learning [2.1303885995425635]
This paper proposes a novel phasor measurement unit (PMU) measurements-based STVSA method by using deep transfer learning.
It employs temporal ensembling for sample labeling and utilizes least squares generative adversarial networks (LSGAN) for data augmentation, enabling effective deep learning on small-scale datasets.
Experimental results on the IEEE 39-bus test system demonstrate that the proposed method improves model evaluation accuracy by approximately 20% through transfer learning.
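For reference, the standard LSGAN objectives replace the usual cross-entropy GAN losses with least-squares terms (shown here with the common 0/1 target coding), which tends to stabilize adversarial training on small datasets:

```latex
\min_D \; \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\text{data}}}\!\left[(D(x)-1)^2\right]
       + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[D(G(z))^2\right],
\qquad
\min_G \; \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z))-1)^2\right]
```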
arXiv Detail & Related papers (2023-08-07T23:44:35Z)
Efficient Deep Reinforcement Learning Requires Regulating Overfitting [91.88004732618381]
We show that high temporal-difference (TD) error on the validation set of transitions is the main culprit that severely affects the performance of deep RL algorithms.
We show that a simple online model selection method that targets the validation TD error is effective across state-based DMC and Gym tasks.
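The selection rule is easy to state concretely: score each candidate Q-function by its mean squared one-step TD error on held-out transitions and keep the lowest. A sketch, where the (s, a, r, s', done) tuple layout and the `actions` list are assumptions:

```python
import numpy as np

def validation_td_error(q_fn, transitions, actions, gamma=0.99):
    """Mean squared one-step TD error of q_fn on held-out transitions.

    transitions: iterable of (s, a, r, s_next, done) tuples
    q_fn:        callable (state, action) -> value
    """
    errs = []
    for s, a, r, s_next, done in transitions:
        target = r if done else r + gamma * max(q_fn(s_next, b) for b in actions)
        errs.append((q_fn(s, a) - target) ** 2)
    return float(np.mean(errs))

# Online model selection: keep the candidate with the lowest validation TD error.
# best = min(candidates, key=lambda q: validation_td_error(q, val_set, actions))
```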
arXiv Detail & Related papers (2023-04-20T17:11:05Z)
Efficient Learning of Voltage Control Strategies via Model-based Deep Reinforcement Learning [9.936452412191326]
This article proposes a model-based deep reinforcement learning (DRL) method to design emergency control strategies for short-term voltage stability problems in power systems.
Recent advances show promising results in model-free DRL-based methods for power systems, but model-free methods suffer from poor sample efficiency and training time.
We propose a novel model-based-DRL framework where a deep neural network (DNN)-based dynamic surrogate model is utilized with the policy learning framework.
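The key moving part is the learned dynamics surrogate that stands in for the expensive grid simulator during policy training. The linear least-squares model below is a deliberately simple stand-in for the paper's DNN, but it shows the interface: fit on logged transitions, then step the policy in imagination.

```python
import numpy as np

class SurrogateDynamics:
    """Linear stand-in for the paper's DNN surrogate: s' ~ W^T [s; a]."""
    def fit(self, S, A, S_next):
        X = np.hstack([S, A])                       # (N, ds + da)
        self.W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
        return self
    def step(self, s, a):
        return np.concatenate([s, a]) @ self.W      # predicted next state

# Fit on logged transitions, then roll the policy out in the surrogate
# instead of the real grid simulator.
rng = np.random.default_rng(1)
S, A = rng.random((200, 4)), rng.random((200, 2))
S_next = S * 0.9 + A @ rng.random((2, 4)) * 0.1
model = SurrogateDynamics().fit(S, A, S_next)
print(model.step(S[0], A[0]))
```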
arXiv Detail & Related papers (2022-12-06T02:50:53Z)
Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics [96.9177297872723]
We present a novel method for guaranteeing conservation of linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
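The conservation argument is worth making concrete: if the learned pairwise message is antisymmetric under swapping the two particles, every momentum exchange cancels with its mirror image, so total momentum is conserved by construction. A toy check; the Gaussian-weighted kernel is illustrative, not the paper's continuous convolutional layer.

```python
import numpy as np

def pairwise_update(xi, xj, w=0.3):
    """Antisymmetric message: f(xi, xj) = -f(xj, xi)."""
    r = xj - xi
    return w * r * np.exp(-r @ r)   # odd in r, hence antisymmetric

pts = np.random.randn(5, 2)
total = sum(pairwise_update(pts[i], pts[j])
            for i in range(5) for j in range(5) if i != j)
print(np.allclose(total, 0.0))      # True: exchanges cancel pairwise
```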
arXiv Detail & Related papers (2022-10-12T09:12:59Z)
Accelerated Policy Learning with Parallel Differentiable Simulation [59.665651562534755]
We present a differentiable simulator and a new policy learning algorithm (SHAC).
Our algorithm alleviates problems with local minima through a smooth critic function.
We show substantial improvements in sample efficiency and wall-clock time over state-of-the-art RL and differentiable simulation-based algorithms.
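The mechanism is clearest in one dimension: a differentiable simulator gives the policy parameter an analytic first-order gradient through the whole rollout, rather than a high-variance RL estimate. A toy sketch with hand-propagated derivatives; the scalar plant and the policy u = theta * s are illustrative assumptions.

```python
def rollout(theta, s0=1.0, steps=20, dt=0.1):
    """Roll a scalar plant s' = s + dt*(-0.5*s + theta*s) and carry
    ds/dtheta through every step by the chain rule."""
    s, ds_dtheta = s0, 0.0
    for _ in range(steps):
        a = 1.0 + dt * (theta - 0.5)          # one-step transition factor
        ds_dtheta = ds_dtheta * a + dt * s    # chain rule through the step
        s = s * a
    return s, ds_dtheta

theta = 1.0
for _ in range(100):
    s, g = rollout(theta)
    theta -= 0.1 * (2.0 * s * g)   # gradient of loss = s**2 through the sim
print(theta, rollout(theta)[0])    # theta drops below 0.5; final state -> 0
```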
arXiv Detail & Related papers (2022-04-14T17:46:26Z)
Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
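A one-step illustration of the attack surface: perturb the agent's observation within a small budget in the direction that most degrades its predicted value; adversarial training then retrains the agent on such perturbed observations. The FGSM-style single step below is a simplified stand-in, since the paper learns its attacker with a full adversary MDP.

```python
import numpy as np

def fgsm_observation_attack(obs, value_grad, eps=0.05):
    """Shift the observation within an L-inf budget of eps in the direction
    that decreases the agent's predicted value the fastest (one-step attack)."""
    return obs - eps * np.sign(value_grad(obs))

# Adversarial training then alternates: retrain the agent on attacked
# observations, re-fit the attacker, and repeat.
```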
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
Rethink AI-based Power Grid Control: Diving Into Algorithm Design [6.194042945960622]
In this paper, we present an in-depth analysis of DRL-based voltage control from the aspects of algorithm selection, state space representation, and reward engineering.
We propose a novel imitation learning-based approach to directly map power grid operating points to effective actions without any interim reinforcement learning process, as sketched below.
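That direct mapping amounts to plain supervised learning over (operating point, effective action) pairs, with no RL loop in between. A nearest-neighbour stand-in for the paper's network keeps the sketch dependency-free; the action labels are hypothetical.

```python
import numpy as np

def fit_direct_policy(operating_points, effective_actions):
    """Behaviour-cloning stand-in: map a grid operating point straight to a
    known-effective control action, with no interim RL process."""
    X = np.asarray(operating_points)
    def policy(state):
        i = np.argmin(np.linalg.norm(X - state, axis=1))
        return effective_actions[i]
    return policy

# Hypothetical usage: two logged operating points with their known fixes.
policy = fit_direct_policy([[1.0, 0.9], [0.7, 0.6]],
                           ["no-op", "shed_load_bus7"])
print(policy(np.array([0.72, 0.58])))   # -> "shed_load_bus7"
```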
arXiv Detail & Related papers (2020-12-23T23:38:41Z)
Fault-Tolerant Control of Degrading Systems with On-Policy Reinforcement Learning [1.8799681615947088]
We propose a novel adaptive reinforcement learning control approach for fault tolerant systems.
Online and offline learning are combined to improve exploration and sample efficiency.
We conduct experiments on an aircraft fuel transfer system to demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-08-10T20:42:59Z)
Reinforcement Learning with Fast Stabilization in Linear Dynamical Systems [91.43582419264763]
We study model-based reinforcement learning (RL) in unknown stabilizable linear dynamical systems.
We propose an algorithm that certifies fast stabilization of the underlying system by effectively exploring the environment.
We show that the proposed algorithm attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret after $T$ time steps of agent-environment interaction.
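For readers unfamiliar with the notation: in the usual regret formulation for adaptive control of linear systems, regret accumulates the gap between the incurred cost and the optimal steady-state cost, and $\tilde{\mathcal{O}}$ suppresses polylogarithmic factors. This is the standard reading, not a detail taken from the paper.

```latex
\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} \left( c_t - J^{*} \right),
\qquad
\mathrm{Regret}(T) \;=\; \tilde{\mathcal{O}}\!\left(\sqrt{T}\right)
```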
arXiv Detail & Related papers (2020-07-23T23:06:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.