Reinforcement Learning for Decision-Making and Control in Power Systems:
Tutorial, Review, and Vision
- URL: http://arxiv.org/abs/2102.01168v3
- Date: Fri, 5 Feb 2021 16:44:41 GMT
- Title: Reinforcement Learning for Decision-Making and Control in Power Systems:
Tutorial, Review, and Vision
- Authors: Xin Chen, Guannan Qu, Yujie Tang, Steven Low, Na Li
- Abstract summary: Reinforcement learning (RL) has attracted surging attention in recent years.
We focus on RL and aim to provide a tutorial on various RL techniques and how they can be applied to the decision-making and control in power systems.
In particular, we select three key applications, including frequency regulation, voltage control, and energy management, for illustration.
- Score: 9.363707557258175
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With large-scale integration of renewable generation and ubiquitous
distributed energy resources (DERs), modern power systems confront a series of
new challenges in operation and control, such as growing complexity, increasing
uncertainty, and worsening volatility. The upside is that more and more data
are available owing to widely deployed smart meters, smart sensors, and
upgraded communication networks. As a result, data-driven control
techniques, especially reinforcement learning (RL), have attracted surging
attention in recent years. In this paper, we focus on RL and aim to provide a
tutorial on various RL techniques and how they can be applied to the
decision-making and control in power systems. In particular, we select three
key applications, including frequency regulation, voltage control, and energy
management, for illustration, and present the typical ways to model and tackle
them with RL methods. We conclude by emphasizing two critical issues in the
application of RL, i.e., safety and scalability. Several potential future
directions are discussed as well.
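The tutorial's core recipe, casting a control task as an MDP and learning a policy from interaction, can be sketched on a toy example. The discretized single-bus voltage-control task and tabular Q-learning below are illustrative assumptions for exposition, not a formulation taken from the paper:

```python
import random

# Toy single-bus voltage-control task cast as a finite MDP.
# States: discretized voltage levels 0..4, with 2 as nominal.
# Actions: 0 = lower tap, 1 = hold, 2 = raise tap.
# All dynamics and rewards here are illustrative assumptions.

N_STATES, N_ACTIONS, NOMINAL = 5, 3, 2

def step(state, action):
    """Deterministic toy dynamics: the tap action shifts voltage by -1/0/+1."""
    nxt = max(0, min(N_STATES - 1, state + (action - 1)))
    return nxt, -abs(nxt - NOMINAL)  # reward penalizes deviation from nominal

def q_learning(episodes=500, horizon=20, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(horizon):
            if rng.random() < eps:
                a = rng.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda x: q[s][x])
            s2, r = step(s, a)
            # Standard Q-learning temporal-difference update.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)  # greedy action per discretized voltage level
```

The learned greedy policy raises the tap when voltage is low, holds at nominal, and lowers when high, which is the behavior the reward encodes.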
Related papers
- Non-Intrusive Electric Load Monitoring Approach Based on Current Feature
Visualization for Smart Energy Management [51.89904044860731]
We employ AI-based computer vision techniques to design a non-intrusive load monitoring method for smart electric energy management.
We propose to recognize all electric loads from color feature images using a U-shape deep neural network with multi-scale feature extraction and attention mechanism.
arXiv Detail & Related papers (2023-08-08T04:52:19Z)
- Pervasive Machine Learning for Smart Radio Environments Enabled by Reconfigurable Intelligent Surfaces [56.35676570414731]
The emerging technology of Reconfigurable Intelligent Surfaces (RISs) is envisioned as an enabler of smart wireless environments.
RISs offer a highly scalable, low-cost, hardware-efficient, and almost energy-neutral solution for dynamic control of the propagation of electromagnetic signals over the wireless medium.
One of the major challenges with the envisioned dense deployment of RISs in such reconfigurable radio environments is the efficient configuration of multiple metasurfaces.
arXiv Detail & Related papers (2022-05-08T06:21:33Z)
- Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge Intelligence [76.96698721128406]
Mobile edge computing (MEC) is considered a novel paradigm for computation- and delay-sensitive tasks in fifth-generation (5G) networks and beyond.
This paper provides a comprehensive review of RL-empowered MEC and offers insights for its development.
arXiv Detail & Related papers (2022-01-27T10:02:54Z)
- Multi-Agent Reinforcement Learning for Active Voltage Control on Power Distribution Networks [2.992389186393994]
The emerging trend of decarbonisation is placing excessive stress on power distribution networks.
Active voltage control is seen as a promising solution to relieve power congestion and improve voltage quality without extra hardware investment.
This paper formulates the active voltage control problem in the framework of Dec-POMDP and establishes an open-source environment.
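The Dec-POMDP viewpoint can be illustrated in miniature: each agent acts on a purely local observation, yet the joint dynamics couple their actions. The two-bus system, deadband, step sizes, and coupling constant below are illustrative assumptions, not the paper's open-source environment:

```python
# Toy two-bus sketch of decentralized voltage control: each agent (an
# inverter) observes only its own bus voltage and independently chooses
# reactive-power support. All numbers are illustrative assumptions.

NOMINAL = 1.0  # per-unit nominal voltage

def local_policy(obs_voltage):
    """Decentralized rule: inject vars when low, absorb when high."""
    if obs_voltage < NOMINAL - 0.02:
        return +0.05   # inject reactive power -> raise local voltage
    if obs_voltage > NOMINAL + 0.02:
        return -0.05   # absorb reactive power -> lower local voltage
    return 0.0

def step(voltages, coupling=0.3):
    """Joint transition: each bus responds to its own action plus a weak
    coupling to the neighboring bus's action."""
    actions = [local_policy(v) for v in voltages]
    return [voltages[i] + actions[i] + coupling * actions[1 - i]
            for i in range(2)]

v = [0.93, 1.08]  # both buses start outside the deadband
for _ in range(10):
    v = step(v)
print([round(x, 3) for x in v])  # both voltages settle inside the deadband
```

Even this crude hand-written rule stabilizes the toy system; the point of the multi-agent RL formulation is to learn such decentralized policies when the coupled dynamics are unknown.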
arXiv Detail & Related papers (2021-10-27T09:31:22Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), a paradigm of collaborative learning, has obtained increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
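The adversarial-training idea can be sketched in miniature: train the controller on observations perturbed by a bounded attacker, so the learned policy discounts what it sees. The linear policy, the perturbation rule, and all constants below are illustrative assumptions, not the paper's adversary-MDP formulation:

```python
import random

# Miniature sketch of adversarial training for a control policy.
# Everything here (linear policy, bounded attack, constants) is an
# illustrative assumption, not the method from the paper.

BUDGET = 0.2  # maximum observation perturbation the adversary may apply

def adversary(state):
    """Crude worst-case attack: shift the observation by the full budget,
    away from zero, so a naive policy overshoots."""
    return state + (BUDGET if state >= 0 else -BUDGET)

def train_policy(adversarial, steps=2000, lr=0.05, seed=0):
    """Fit a linear policy u = w * obs by SGD on the tracking error
    (u - state)^2, optionally under attacked observations."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        state = rng.uniform(-1.0, 1.0)
        obs = adversary(state) if adversarial else state
        grad = 2.0 * (w * obs - state) * obs  # d/dw of (w*obs - state)^2
        w -= lr * grad
    return w

w_nominal = train_policy(adversarial=False)
w_robust = train_policy(adversarial=True)
print(round(w_nominal, 3), round(w_robust, 3))
```

The adversarially trained gain comes out smaller than the nominal one: the robust policy deliberately under-reacts to observations it knows may be corrupted, which is the qualitative effect adversarial training aims for.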
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Distributed Learning in Wireless Networks: Recent Progress and Future Challenges [170.35951727508225]
Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
arXiv Detail & Related papers (2021-04-05T20:57:56Z)
- Learning and Fast Adaptation for Grid Emergency Control via Deep Meta Reinforcement Learning [22.58070790887177]
Power systems are undergoing a significant transformation, with more uncertainties, less inertia, and operation closer to their limits.
There is an imperative need to enhance grid emergency control to maintain system reliability and security.
Great progress has been made in developing deep reinforcement learning (DRL) based grid control solutions in recent years.
Existing DRL-based solutions have two main limitations: 1) they cannot handle a wide range of grid operating conditions, system parameters, and contingencies well; 2) they generally lack the ability to adapt quickly to new grid operating conditions, system parameters, and contingencies, limiting their applicability to real-world use.
arXiv Detail & Related papers (2021-01-13T19:45:59Z)
- Rethink AI-based Power Grid Control: Diving Into Algorithm Design [6.194042945960622]
In this paper, we present an in-depth analysis of DRL-based voltage control from the aspects of algorithm selection, state-space representation, and reward engineering.
We propose a novel imitation learning-based approach to directly map power grid operating points to effective actions without any interim reinforcement learning process.
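The imitation-learning idea can be sketched as plain supervised learning: fit a direct map from operating point to control action using expert demonstrations, with no interim RL. The "expert" rule below is a made-up stand-in for an optimal-power-flow solver; every name and number here is an illustrative assumption:

```python
import random

# Sketch of imitation learning for grid control: collect (operating point,
# expert action) pairs, then fit a policy by regression. The expert rule is
# a hypothetical stand-in for an OPF solver.

rng = random.Random(1)

def expert_action(load):
    """Pretend OPF expert: control setting proportional to the load level."""
    return 0.5 * load

# Collect demonstrations: (operating point, expert action) pairs.
data = [(l, expert_action(l))
        for l in (rng.uniform(0.0, 1.0) for _ in range(200))]

# Fit a one-parameter linear policy a = w * l by least squares (closed form).
num = sum(l * a for l, a in data)
den = sum(l * l for l, _ in data)
w = num / den
print(round(w, 3))  # the fitted gain recovers the expert's rule
```

Because the policy is trained purely on demonstrations, no environment interaction or reward shaping is needed, which is the efficiency argument the paper makes against interim RL.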
arXiv Detail & Related papers (2020-12-23T23:38:41Z)
- Machine Learning in Event-Triggered Control: Recent Advances and Open Issues [0.7699714865575188]
This article reviews the literature on the use of machine learning in combination with event-triggered control.
We discuss how these learning algorithms can be used for different applications depending on the purpose of the machine learning use.
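The event-triggered principle these works build on is simple to state: communicate or recompute the control only when the measurement drifts from the last transmitted value by more than a threshold. The signal and threshold below are illustrative assumptions, not an example from the review:

```python
# Minimal event-triggered update sketch: an event fires only when the
# measurement deviates from the last transmitted value by more than a
# threshold, saving communication between events.

THRESHOLD = 0.1

def event_triggered(samples, threshold=THRESHOLD):
    """Return the indices at which an update event fires."""
    events, last = [], None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > threshold:
            events.append(i)
            last = x  # transmit and remember the new value
    return events

sig = [0.0, 0.02, 0.05, 0.2, 0.21, 0.35, 0.36, 0.36]
print(event_triggered(sig))  # only a few samples trigger transmissions
```

Machine learning enters, per the review, in choosing or adapting such triggering rules and the controllers between events, rather than fixing the threshold by hand as done here.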
arXiv Detail & Related papers (2020-09-27T08:11:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.