Deep Reinforcement Learning for Electric Transmission Voltage Control
- URL: http://arxiv.org/abs/2006.06728v2
- Date: Fri, 16 Oct 2020 03:05:43 GMT
- Title: Deep Reinforcement Learning for Electric Transmission Voltage Control
- Authors: Brandon L. Thayer and Thomas J. Overbye
- Abstract summary: A subset of machine learning known as deep reinforcement learning (DRL) has recently shown promise in performing tasks typically performed by humans.
This paper applies DRL to the transmission voltage control problem, presents open-source DRL environments for voltage control, and performs experiments at scale with systems up to 500 buses.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today, human operators primarily perform voltage control of the electric
transmission system. As the complexity of the grid increases, so does the
complexity of its operation, suggesting additional automation could be beneficial. A subset of
machine learning known as deep reinforcement learning (DRL) has recently shown
promise in performing tasks typically performed by humans. This paper applies
DRL to the transmission voltage control problem, presents open-source DRL
environments for voltage control, proposes a novel modification to the "deep Q
network" (DQN) algorithm, and performs experiments at scale with systems up to
500 buses. The promise of applying DRL to voltage control is demonstrated,
though more research is needed to enable DRL-based techniques to consistently
outperform conventional methods.
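The paper's open-source environments and its modified DQN are not reproduced here. As a rough illustration of the problem setup only, the sketch below runs a plain DQN loop against a toy stand-in environment whose state is a vector of per-bus voltage magnitudes, whose discrete actions nudge setpoints, and whose reward penalizes buses outside the 0.95-1.05 p.u. band; the environment, its dynamics, and all hyperparameters are assumptions for illustration.

```python
# Hedged sketch: a vanilla DQN against a toy stand-in voltage-control
# environment. Nothing here reproduces the paper's environments or its
# proposed DQN modification.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class ToyVoltageEnv:
    """Toy stand-in (assumption): state = per-bus voltages in p.u.,
    discrete actions nudge the voltage at one bus up or down, and the
    reward is the negative count of buses outside 0.95-1.05 p.u."""

    def __init__(self, n_bus=14, n_actions=8):
        self.n_bus, self.n_actions = n_bus, n_actions

    def reset(self):
        self.v = np.random.uniform(0.9, 1.1, self.n_bus).astype(np.float32)
        return self.v.copy()

    def step(self, action):
        self.v += np.random.uniform(-0.01, 0.01, self.n_bus).astype(np.float32)
        self.v[action % self.n_bus] += (0.02 if action < self.n_actions // 2 else -0.02)
        reward = -float(np.sum((self.v < 0.95) | (self.v > 1.05)))
        return self.v.copy(), reward, False, {}


def q_net(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, n_out))


env = ToyVoltageEnv()
q, q_target = q_net(env.n_bus, env.n_actions), q_net(env.n_bus, env.n_actions)
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=10_000), 0.99, 0.1

s = env.reset()
for t in range(2000):
    # Epsilon-greedy action selection over the discrete control set.
    if random.random() < eps:
        a = random.randrange(env.n_actions)
    else:
        with torch.no_grad():
            a = int(q(torch.tensor(s)).argmax())
    s2, r, done, _ = env.step(a)
    buffer.append((s, a, r, s2))
    s = s2

    if len(buffer) >= 64:
        # Standard DQN update on a random minibatch of stored transitions.
        bs, ba, br, bs2 = map(np.array, zip(*random.sample(buffer, 64)))
        bs, bs2 = torch.tensor(bs), torch.tensor(bs2)
        ba = torch.tensor(ba, dtype=torch.int64)
        br = torch.tensor(br, dtype=torch.float32)
        q_sa = q(bs).gather(1, ba.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = br + gamma * q_target(bs2).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        opt.zero_grad(); loss.backward(); opt.step()

    if t % 200 == 0:
        q_target.load_state_dict(q.state_dict())
```

In the paper's setting, the toy environment would be replaced by the released voltage-control environments (with systems up to 500 buses) and the plain DQN update by the proposed modification.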
Related papers
- Robust Deep Reinforcement Learning for Inverter-based Volt-Var Control in Partially Observable Distribution Networks [11.073055284983626]
A key issue in DRL-based approaches is the limited measurement deployment in active distribution networks.
To address these problems, this paper proposes a robust DRL approach with a conservative critic and a surrogate reward.
arXiv Detail & Related papers (2024-08-13T10:02:10Z)
- Compressing Deep Reinforcement Learning Networks with a Dynamic Structured Pruning Method for Autonomous Driving [63.155562267383864]
Deep reinforcement learning (DRL) has shown remarkable success in complex autonomous driving scenarios.
However, DRL models inevitably bring high memory consumption and heavy computation, which hinders their wide deployment on resource-limited autonomous driving devices.
We introduce a novel dynamic structured pruning approach that gradually removes a DRL model's unimportant neurons during the training stage.
arXiv Detail & Related papers (2024-02-07T09:00:30Z)
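The pruning entry above describes gradually removing a DRL model's unimportant neurons during training. Below is a minimal, hedged sketch of the general idea only: generic magnitude-based structured pruning of hidden neurons in a PyTorch MLP with an illustrative schedule. It is not the cited paper's dynamic pruning method, and the network, importance score, and ratios are assumptions.

```python
# Hedged sketch: generic magnitude-based structured pruning of hidden neurons,
# not the cited paper's specific dynamic structured pruning method.
import torch
import torch.nn as nn


def prune_hidden_neurons(layer_in: nn.Linear, layer_out: nn.Linear, ratio: float):
    """Zero out the `ratio` fraction of hidden neurons whose incoming-weight
    L2 norm is smallest (structured, neuron-level pruning by magnitude)."""
    importance = layer_in.weight.detach().norm(dim=1)   # one score per hidden neuron
    k = int(ratio * importance.numel())
    if k == 0:
        return
    drop = importance.argsort()[:k]                      # least-important neurons
    with torch.no_grad():
        layer_in.weight[drop, :] = 0.0                   # their incoming weights
        layer_in.bias[drop] = 0.0
        layer_out.weight[:, drop] = 0.0                  # and outgoing weights


# Illustrative schedule: raise the pruning ratio gradually over training steps.
policy = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 4))
for step in range(0, 10_001, 1_000):
    ratio = min(0.5, step / 10_000)                      # 0.0 -> 0.5 over training
    prune_hidden_neurons(policy[0], policy[2], ratio)    # normally interleaved with RL updates
```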
- Digital Twin Assisted Deep Reinforcement Learning for Online Admission Control in Sliced Network [19.152875040151976]
We propose a digital twin (DT)-accelerated DRL solution to address this issue.
A neural network-based DT is established with a customized output layer for queuing systems, trained through supervised learning, and then employed to assist the training phase of the DRL model.
Extensive simulations show that the DT-accelerated DRL improves resource utilization by over 40% compared to the directly trained state-of-the-art dueling deep Q-learning model.
arXiv Detail & Related papers (2023-10-07T09:09:19Z)
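The digital-twin entry above describes fitting a neural surrogate of the environment by supervised learning and then using it to assist DRL training. The sketch below shows that general pattern only; the customized queuing-system output layer and the admission-control details of the cited paper are not reproduced, and the dimensions and placeholder data are assumptions.

```python
# Hedged sketch of the general pattern: fit a neural digital twin on logged
# transitions, then let the RL agent train against cheap synthetic rollouts.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 1

# 1) Supervised learning: the twin predicts (next_state, reward) from (state, action).
twin = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                     nn.Linear(64, state_dim + 1))
opt = torch.optim.Adam(twin.parameters(), lr=1e-3)

logged_sa = torch.randn(1024, state_dim + action_dim)      # placeholder logged (s, a)
logged_target = torch.randn(1024, state_dim + 1)            # placeholder [next_state | reward]
for _ in range(200):
    loss = nn.functional.mse_loss(twin(logged_sa), logged_target)
    opt.zero_grad(); loss.backward(); opt.step()


# 2) Assist DRL training: generate synthetic transitions from the trained twin.
def synthetic_step(state: torch.Tensor, action: torch.Tensor):
    with torch.no_grad():
        out = twin(torch.cat([state, action], dim=-1))
    return out[..., :state_dim], out[..., state_dim]         # next_state, reward


s = torch.randn(state_dim)
a = torch.zeros(action_dim)                                   # e.g. an "admit request" action
next_s, r = synthetic_step(s, a)                              # feed into any DRL training loop
```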
- Real-Time Model-Free Deep Reinforcement Learning for Force Control of a Series Elastic Actuator [56.11574814802912]
State-of-the-art robotic applications utilize series elastic actuators (SEAs) with closed-loop force control to achieve complex tasks such as walking, lifting, and manipulation.
Model-free PID control methods are more prone to instability due to nonlinearities in the SEA.
Deep reinforcement learning has proved to be an effective model-free method for continuous control tasks.
arXiv Detail & Related papers (2023-04-11T00:51:47Z)
- On Transforming Reinforcement Learning by Transformer: The Development Trajectory [97.79247023389445]
The Transformer, originally devised for natural language processing, has also achieved significant success in computer vision.
We group existing developments of Transformer-based reinforcement learning (TRL) into two categories: architecture enhancement and trajectory optimization.
We examine the main applications of TRL in robotic manipulation, text-based games, navigation, and autonomous driving.
arXiv Detail & Related papers (2022-12-29T03:15:59Z)
- Reinforcement Learning for Resilient Power Grids [0.23204178451683263]
Traditional power grid systems have become obsolete under more frequent and extreme natural disasters.
Most power grid simulators and RL interfaces do not support simulation of the power grid under large-scale blackouts or when the network is divided into sub-networks.
In this study, we proposed an updated power grid simulator built on Grid2Op, an existing simulator and RL interface, and experimented with limiting the action and observation spaces of Grid2Op.
arXiv Detail & Related papers (2022-12-08T04:40:14Z)
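The entry above experiments with limiting Grid2Op's action and observation spaces. Below is a minimal sketch of that general idea using Grid2Op's standard API (grid2op.make and dict-built actions); the environment name, candidate-action list, and observation fields are illustrative assumptions, not the cited paper's updated simulator or its blackout scenarios.

```python
# Hedged sketch: restrict the agent to a small curated action list and a few
# observation fields, assuming the standard Grid2Op API.
import numpy as np
import grid2op

env = grid2op.make("l2rpn_case14_sandbox")   # a standard Grid2Op test case (assumption)

# Restricted action space: do-nothing plus reconnecting each transmission line.
candidate_actions = [env.action_space({})]
candidate_actions += [env.action_space({"set_line_status": [(line_id, +1)]})
                      for line_id in range(env.n_line)]


def restricted_obs(obs):
    """Restricted observation: keep only line loadings and line statuses."""
    return np.concatenate([obs.rho, obs.line_status.astype(float)])


obs = env.reset()
done = False
while not done:
    act_id = 0                                # an RL agent would choose this index
    obs, reward, done, info = env.step(candidate_actions[act_id])
    features = restricted_obs(obs)            # fed to the agent instead of the full observation
```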
- Efficient Learning of Voltage Control Strategies via Model-based Deep Reinforcement Learning [9.936452412191326]
This article proposes a model-based deep reinforcement learning (DRL) method to design emergency control strategies for short-term voltage stability problems in power systems.
Recent advances show promising results in model-free DRL-based methods for power systems, but model-free methods suffer from poor sample efficiency and long training times.
We propose a novel model-based-DRL framework where a deep neural network (DNN)-based dynamic surrogate model is utilized with the policy learning framework.
arXiv Detail & Related papers (2022-12-06T02:50:53Z)
- Stabilizing Voltage in Power Distribution Networks via Multi-Agent Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z)
- Automated Reinforcement Learning (AutoRL): A Survey and Open Problems [92.73407630874841]
Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also additional challenges unique to RL.
We provide a common taxonomy, discuss each area in detail and pose open problems which would be of interest to researchers going forward.
arXiv Detail & Related papers (2022-01-11T12:41:43Z)
- RL-DARTS: Differentiable Architecture Search for Reinforcement Learning [62.95469460505922]
We introduce RL-DARTS, one of the first applications of Differentiable Architecture Search (DARTS) in reinforcement learning (RL).
By replacing the image encoder with a DARTS supernet, our search method is sample-efficient, requires minimal extra compute resources, and is also compatible with off-policy and on-policy RL algorithms, needing only minor changes in preexisting code.
We show that the supernet gradually learns better cells, leading to alternative architectures that are highly competitive with manually designed policies, and we also verify previous design choices for RL policies.
arXiv Detail & Related papers (2021-06-04T03:08:43Z)
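For readers unfamiliar with DARTS, the sketch below shows only the core building block implied by the entry above: a softmax-weighted mixture of candidate operations whose architecture parameters are trained by gradient descent alongside the ordinary weights. The candidate operations and channel count are illustrative assumptions; the RL-DARTS supernet and its integration with an RL image encoder are not reproduced.

```python
# Hedged sketch of one DARTS "mixed operation" only, not the RL-DARTS supernet.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOp(nn.Module):
    """One DARTS edge: output is a learned convex combination of candidate ops."""

    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
        ])
        # Architecture parameters ("alphas"); after search, the argmax op is kept.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))


# Example: such a block could stand in for one layer of an RL policy's image encoder.
feature_map = torch.randn(1, 16, 84, 84)
out = MixedOp(channels=16)(feature_map)
```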
- Scalable Voltage Control using Structure-Driven Hierarchical Deep Reinforcement Learning [0.0]
This paper presents a novel hierarchical deep reinforcement learning (DRL) based design for the voltage control of power grids.
We exploit the area-wise division structure of the power system to propose a hierarchical DRL design that can be scaled to larger grid models.
We train area-wise decentralized RL agents to compute lower-level policies for the individual areas, and concurrently train a higher-level DRL agent that uses the updates of the lower-level policies to efficiently coordinate the control actions taken by the lower-level agents.
arXiv Detail & Related papers (2021-01-29T21:30:59Z)
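As a structural illustration of the hierarchy described in the entry above, the sketch below wires a higher-level coordinator, which assigns one setpoint per area, to lower-level per-area controllers. The placeholder policies, area sizes, and voltage thresholds are assumptions; the cited paper's concurrent DRL training of both levels is not reproduced.

```python
# Hedged structural sketch with placeholder policies; the cited paper's
# hierarchical DRL training scheme is not reproduced here.
import numpy as np

N_AREAS, BUSES_PER_AREA = 3, 5


class LowerLevelAgent:
    """Controls one area: maps (area voltages, coordinator setpoint) to a local action."""

    def act(self, area_voltages: np.ndarray, setpoint: int) -> int:
        # Placeholder rule standing in for a trained per-area DRL policy.
        return setpoint if area_voltages.mean() < 1.0 else 0


class HigherLevelAgent:
    """Coordinator: observes a coarse system summary and assigns one setpoint per area."""

    def act(self, area_mean_voltages: np.ndarray) -> np.ndarray:
        # Placeholder rule standing in for the trained higher-level DRL policy.
        return np.where(area_mean_voltages < 0.98, 1, 0)


rng = np.random.default_rng(0)
lower_agents = [LowerLevelAgent() for _ in range(N_AREAS)]
coordinator = HigherLevelAgent()

voltages = rng.uniform(0.93, 1.07, size=(N_AREAS, BUSES_PER_AREA))
setpoints = coordinator.act(voltages.mean(axis=1))              # top-level coordination
actions = [lower_agents[i].act(voltages[i], int(setpoints[i]))  # per-area control actions
           for i in range(N_AREAS)]
```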
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.