Lyapunov-Regularized Reinforcement Learning for Power System Transient
Stability
- URL: http://arxiv.org/abs/2103.03869v1
- Date: Fri, 5 Mar 2021 18:55:26 GMT
- Title: Lyapunov-Regularized Reinforcement Learning for Power System Transient
Stability
- Authors: Wenqi Cui, Baosen Zhang
- Abstract summary: This paper proposes a Lyapunov regularized RL approach for optimal frequency control for transient stability in lossy networks.
A case study shows that introducing the Lyapunov regularization enables the controller to be stabilizing while achieving smaller losses.
- Score: 5.634825161148484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transient stability of power systems is becoming increasingly important
because of the growing integration of renewable resources. These resources lead
to a reduction in mechanical inertia but also provide increased flexibility in
frequency responses. Namely, their power electronic interfaces can implement
almost arbitrary control laws. To design these controllers, reinforcement
learning (RL) has emerged as a powerful method for searching for optimal
nonlinear control policies parameterized by neural networks.
A key challenge is to ensure that the learned controller is stabilizing.
This paper proposes a Lyapunov regularized RL approach for optimal frequency
control for transient stability in lossy networks. Because an analytical
Lyapunov function is not available, we learn a Lyapunov function parameterized
by a neural network, with losses specially designed with respect to the physical
power system. The learned neural Lyapunov function is then utilized as a
regularization to train the neural network controller by penalizing actions
that violate the Lyapunov conditions. A case study shows that introducing the
Lyapunov regularization enables the controller to be stabilizing while
achieving smaller losses.
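To make the abstract's recipe concrete, below is a minimal, hypothetical sketch of how a learned neural Lyapunov function can regularize the training of a neural network controller by penalizing actions that violate the Lyapunov decrease condition. It is not the paper's implementation: it assumes a toy single-machine swing-equation surrogate and PyTorch, and all names (swing_step, lyap_net, policy_net, lambda_reg) are illustrative.

```python
import torch
import torch.nn as nn

# Toy single-machine swing dynamics (the paper treats lossy multi-bus networks;
# this surrogate only serves to illustrate the regularization idea).
def swing_step(delta, omega, u, dt=0.01, M=0.2, D=0.1, P=0.3):
    """One Euler step of: d(delta)/dt = omega, M * d(omega)/dt = P - D*omega - sin(delta) + u."""
    domega = (P - D * omega - torch.sin(delta) + u) / M
    return delta + dt * omega, omega + dt * domega

lyap_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))    # candidate V(x)
policy_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))  # controller u = pi(x)

def lyapunov_violation(x, x_next, margin=1e-3):
    """ReLU penalty for violating the decrease condition V(x') - V(x) <= -margin * ||x||^2."""
    v, v_next = lyap_net(x), lyap_net(x_next)
    return torch.relu(v_next - v + margin * (x ** 2).sum(dim=-1, keepdim=True))

def regularized_policy_loss(x, lambda_reg=10.0):
    """Control cost plus a Lyapunov regularization term, mirroring the abstract's description."""
    delta, omega = x[:, :1], x[:, 1:]
    u = policy_net(x)
    delta_next, omega_next = swing_step(delta, omega, u)
    x_next = torch.cat([delta_next, omega_next], dim=-1)
    control_cost = (omega ** 2 + 0.1 * u ** 2).mean()          # frequency deviation + control effort
    stability_penalty = lyapunov_violation(x, x_next).mean()   # penalize Lyapunov-condition violations
    return control_cost + lambda_reg * stability_penalty

# Usage: sample states around the equilibrium and take a gradient step on the controller.
x_batch = 0.5 * torch.randn(256, 2)
loss = regularized_policy_loss(x_batch)
loss.backward()
```

In this sketch, lambda_reg trades off the control cost against the Lyapunov-violation penalty; the paper additionally trains the Lyapunov network itself with losses tailored to the lossy power network model.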
Related papers
- Formally Verified Physics-Informed Neural Control Lyapunov Functions [4.2162963332651575]
Control Lyapunov functions are a central tool in the design and analysis of stabilizing controllers for nonlinear systems.
In this paper, we investigate physics-informed learning and formal verification of neural network control Lyapunov functions.
arXiv Detail & Related papers (2024-09-30T17:27:56Z)
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
- Improving the Performance of Robust Control through Event-Triggered Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
- The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning equilibrium recurrent neural networks, deep equilibrium models, or meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z)
- Neural Lyapunov Control of Unknown Nonlinear Systems with Stability Guarantees [4.786698731084036]
We propose a learning framework to stabilize an unknown nonlinear system with a neural controller and learn a neural Lyapunov function.
We provide theoretical guarantees of the proposed learning framework in terms of the closed-loop stability for the unknown nonlinear system.
arXiv Detail & Related papers (2022-06-04T05:57:31Z)
- Adversarially Regularized Policy Learning Guided by Trajectory Optimization [31.122262331980153]
We propose adVErsarially Regularized pOlicy learNIng guided by trajeCtory optimizAtion (VERONICA) for learning smooth control policies.
Our proposed approach improves the sample efficiency of neural policy learning and enhances the robustness of the policy against various types of disturbances.
arXiv Detail & Related papers (2021-09-16T00:02:11Z)
- Regularizing Action Policies for Smooth Control with Reinforcement Learning [47.312768123967025]
Conditioning for Action Policy Smoothness (CAPS) is an effective yet intuitive regularization on action policies.
CAPS offers consistent improvement in the smoothness of the learned state-to-action mappings of neural network controllers.
Tested on a real quadrotor drone, the improvements in controller smoothness resulted in an almost 80% reduction in power consumption.
arXiv Detail & Related papers (2020-12-11T21:35:24Z)
- Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)
- Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations [52.493315075385325]
We show that a family of regularizers, including weight decay, is ineffective at penalizing the intrinsic norms of weights for networks with homogeneous activation functions.
We propose an improved regularizer that is invariant to weight scale shifting and thus effectively constrains the intrinsic norm of a neural network.
arXiv Detail & Related papers (2020-08-07T02:55:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.