A Graph Policy Network Approach for Volt-Var Control in Power
Distribution Systems
- URL: http://arxiv.org/abs/2109.12073v1
- Date: Fri, 24 Sep 2021 16:55:41 GMT
- Title: A Graph Policy Network Approach for Volt-Var Control in Power
Distribution Systems
- Authors: Xian Yeow Lee, Soumik Sarkar, Yubo Wang
- Abstract summary: Volt-var control (VVC) is the problem of operating power distribution systems within healthy regimes by controlling actuators in power systems.
We propose a framework that combines RL with graph neural networks and study the benefits and limitations of graph-based policies.
- Score: 11.196936903669386
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Volt-var control (VVC) is the problem of operating power distribution systems
within healthy regimes by controlling actuators in power systems. Existing
works have mostly adopted the conventional routine of representing the power
system (a graph with tree topology) as a vector to train deep reinforcement
learning (RL) policies. We propose a framework that combines RL with graph
neural networks and study the benefits and limitations of graph-based policies in
the VVC setting. Our results show that graph-based policies converge to the
same asymptotic rewards, albeit at a slower rate than their vector-representation
counterparts. We conduct further analysis on the impact of both
observations and actions: on the observation end, we examine the robustness of
graph-based policies to two typical data-acquisition errors in power systems,
namely sensor communication failure and measurement misalignment. On the action
end, we show that actuators have varying impacts on the system, so a
graph representation induced by the power system topology may not be the optimal
choice. Finally, we conduct a case study to demonstrate that the choice of
readout-function architecture and graph augmentation can further improve
training performance and robustness.
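As a rough illustration of the idea (not the authors' implementation), a graph-based VVC policy can encode per-bus measurements, pass messages along the feeder topology, and read out one discrete setting per actuator. All dimensions, feature choices, and the mean-pool readout below are assumptions made for the sketch.

```python
# Minimal sketch of a graph policy network for volt-var control.
# Nodes are buses with local measurements, edges follow the feeder topology,
# and a pooled readout maps node embeddings to actuator set-points.
import torch
import torch.nn as nn


class GraphVVCPolicy(nn.Module):
    def __init__(self, node_dim, hidden_dim, n_actuators, n_settings):
        super().__init__()
        self.encode = nn.Linear(node_dim, hidden_dim)
        self.message = nn.Linear(hidden_dim, hidden_dim)
        self.update = nn.GRUCell(hidden_dim, hidden_dim)
        # Readout: pooled graph embedding -> logits for each actuator's discrete setting.
        self.readout = nn.Linear(hidden_dim, n_actuators * n_settings)
        self.n_actuators, self.n_settings = n_actuators, n_settings

    def forward(self, x, edge_index, n_rounds=2):
        # x: (n_buses, node_dim) local measurements (e.g., voltage, P, Q)
        # edge_index: (2, n_edges) bus adjacency of the feeder (directed here for brevity)
        h = torch.relu(self.encode(x))
        src, dst = edge_index
        for _ in range(n_rounds):
            msgs = self.message(h)[src]                        # messages along edges
            agg = torch.zeros_like(h).index_add(0, dst, msgs)  # sum at receiving buses
            h = self.update(agg, h)                            # node-state update
        g = h.mean(dim=0)                                      # mean-pool readout
        logits = self.readout(g).view(self.n_actuators, self.n_settings)
        return torch.distributions.Categorical(logits=logits)  # one action per actuator


# Usage: sample tap/capacitor settings for a toy 4-bus radial feeder.
policy = GraphVVCPolicy(node_dim=3, hidden_dim=32, n_actuators=2, n_settings=5)
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
action = policy(x, edge_index).sample()   # tensor of shape (2,)
```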
Related papers
- Graph Attention Inference of Network Topology in Multi-Agent Systems [0.0]
Our work introduces a novel machine learning-based solution that leverages the attention mechanism to predict future states of multi-agent systems.
The graph structure is then inferred from the strength of the attention values.
Our results demonstrate that the presented data-driven graph attention machine learning model can identify the network topology in multi-agent systems.
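A minimal sketch of this kind of attention-based topology inference (generic, with assumed dimensions; not the paper's architecture):

```python
# Predict each agent's next state by attending over all agents, then read the
# interaction graph off the learned attention matrix by thresholding.
import torch
import torch.nn as nn


class AttentionDynamics(nn.Module):
    def __init__(self, state_dim, embed_dim=16):
        super().__init__()
        self.q = nn.Linear(state_dim, embed_dim)
        self.k = nn.Linear(state_dim, embed_dim)
        self.v = nn.Linear(state_dim, embed_dim)
        self.out = nn.Linear(embed_dim, state_dim)

    def forward(self, states):
        # states: (n_agents, state_dim) at time t; returns predicted states at t+1
        q, k, v = self.q(states), self.k(states), self.v(states)
        attn = torch.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)  # (n, n)
        return self.out(attn @ v), attn


model = AttentionDynamics(state_dim=4)
states = torch.randn(6, 4)
pred_next, attn = model(states)
# After training on trajectories (MSE between pred_next and true next states),
# strong attention entries are taken as inferred edges.
adjacency = (attn > attn.mean()).int()
```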
arXiv Detail & Related papers (2024-08-27T23:58:51Z)
- Graph Neural Networks on Factor Graphs for Robust, Fast, and Scalable Linear State Estimation with PMUs [1.1470070927586016]
We present a method that uses graph neural networks (GNNs) to learn complex bus voltage estimates from PMU voltage and current measurements.
We propose an original implementation of GNNs over the power system's factor graph to simplify the integration of various types and quantities of measurements.
This model is highly efficient and scalable, as its computational complexity is linear with respect to the number of nodes in the power system.
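A rough sketch of bipartite message passing over such a factor graph (the node types, dimensions, and two-round schedule below are illustrative assumptions): variable nodes carry bus voltages, factor nodes carry PMU measurements, and per-node updates keep the cost linear in graph size.

```python
# Illustrative factor-graph GNN for state estimation (not the paper's model).
import torch
import torch.nn as nn


class FactorGraphGNN(nn.Module):
    def __init__(self, meas_dim, hidden=32):
        super().__init__()
        self.embed_factor = nn.Linear(meas_dim, hidden)
        self.var_update = nn.GRUCell(hidden, hidden)
        self.factor_update = nn.GRUCell(hidden, hidden)
        self.readout = nn.Linear(hidden, 2)   # real and imaginary voltage parts

    def forward(self, meas, f2v, n_vars, n_rounds=2):
        # meas: (n_factors, meas_dim) measurements; f2v: (2, n_links) factor->variable links
        h_f = torch.relu(self.embed_factor(meas))
        h_v = torch.zeros(n_vars, h_f.shape[1])
        f_idx, v_idx = f2v
        for _ in range(n_rounds):
            to_v = torch.zeros_like(h_v).index_add(0, v_idx, h_f[f_idx])  # factors -> variables
            h_v = self.var_update(to_v, h_v)
            to_f = torch.zeros_like(h_f).index_add(0, f_idx, h_v[v_idx])  # variables -> factors
            h_f = self.factor_update(to_f, h_f)
        return self.readout(h_v)               # per-bus voltage estimate


model = FactorGraphGNN(meas_dim=4)
f2v = torch.tensor([[0, 1, 2, 3, 4, 5], [0, 0, 1, 1, 2, 2]])
estimates = model(torch.randn(6, 4), f2v, n_vars=3)
```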
arXiv Detail & Related papers (2023-04-28T08:17:52Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
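For intuition, a generic influence-function unlearning update (not the GIF estimator itself, which additionally accounts for structurally affected neighbors in the graph) looks roughly like this:

```python
# Back-of-the-envelope influence-function unlearning: after deleting a few
# training points, the parameter change is approximated by H^{-1} times the
# gradient of the deleted points' loss, where H is the Hessian of the
# remaining loss. Dense-Hessian toy version for a tiny linear model.
import torch

torch.manual_seed(0)
X, y = torch.randn(50, 3), torch.randn(50)
theta = torch.zeros(3, requires_grad=True)

def loss_fn(theta, X, y):
    return ((X @ theta - y) ** 2).mean()

# Pretend indices 0..4 are the deleted (unlearned) samples.
keep, drop = slice(5, 50), slice(0, 5)

grad_drop = torch.autograd.grad(loss_fn(theta, X[drop], y[drop]), theta)[0]
H = torch.autograd.functional.hessian(lambda t: loss_fn(t, X[keep], y[keep]), theta)
delta = torch.linalg.solve(H, grad_drop)     # approximate parameter change
theta_unlearned = theta.detach() + delta     # one-shot influence update
```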
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Graph Decision Transformer [83.76329715043205]
Graph Decision Transformer (GDT) is a novel offline reinforcement learning approach.
GDT models the input sequence into a causal graph to capture potential dependencies between fundamentally different concepts.
Our experiments show that GDT matches or surpasses the performance of state-of-the-art offline RL methods on image-based Atari and OpenAI Gym.
arXiv Detail & Related papers (2023-03-07T09:10:34Z)
- Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z)
- Robust and Fast Data-Driven Power System State Estimator Using Graph Neural Networks [1.2891210250935146]
We present a method for training a model based on graph neural networks (GNNs) to learn estimates from PMU voltage and current measurements.
We propose an original GNN implementation over the power system's factor graph to simplify the incorporation of various types and numbers of measurements.
arXiv Detail & Related papers (2022-06-06T16:40:54Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
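A conceptual sketch of the unrolling pattern (with a placeholder data-fidelity term and a soft-threshold prox chosen purely for illustration; the actual GDN layers are parameterized differently):

```python
# Unrolled, truncated proximal gradient iterations as a learnable network:
# each layer takes a gradient step on a data-fidelity term and then applies a
# sparsity-inducing soft-threshold prox with learnable step size and threshold.
import torch
import torch.nn as nn


class UnrolledDeconv(nn.Module):
    def __init__(self, n_layers=5):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((n_layers,), 0.1))  # per-layer step sizes
        self.tau = nn.Parameter(torch.full((n_layers,), 0.05))   # per-layer thresholds

    def forward(self, observed):
        # observed: (n, n) "mixed" graph; latent estimate starts at zero.
        latent = torch.zeros_like(observed)
        for a, t in zip(self.alpha, self.tau):
            grad = latent - observed                    # placeholder fidelity gradient;
            latent = latent - a * grad                  # the real model ties latent and
            latent = torch.relu(latent.abs() - t) * latent.sign()  # observed via a convolutional mixture
        return latent


model = UnrolledDeconv()
observed = torch.rand(8, 8)
latent_graph = model(observed)   # trained end-to-end against ground-truth graphs
```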
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Solving AC Power Flow with Graph Neural Networks under Realistic Constraints [3.114162328765758]
We propose a graph neural network architecture to solve the AC power flow problem under realistic constraints.
In our approach, we develop a framework that uses graph neural networks to learn the physical constraints of the power flow.
arXiv Detail & Related papers (2022-04-14T14:49:34Z)
- Wireless Link Scheduling via Graph Representation Learning: A Comparative Study of Different Supervision Levels [4.264192013842096]
We consider the problem of binary power control, or link scheduling, in wireless interference networks, where the power control policy is trained using graph representation learning.
We show how the node embeddings can be trained in several ways, including via supervised, unsupervised, and self-supervised learning.
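A toy sketch of scheduling links from learned embeddings (the interference-graph construction and encoder below are assumptions, not the paper's models; the same embedding could be trained with supervised, unsupervised, or self-supervised objectives):

```python
# Each link is a node in an interference graph; a small graph encoder produces
# per-link embeddings, and a head outputs an on/off (binary power) decision.
import torch
import torch.nn as nn


class LinkScheduler(nn.Module):
    def __init__(self, feat_dim, hidden=32):
        super().__init__()
        self.enc = nn.Linear(feat_dim, hidden)
        self.mix = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, 1)          # per-link on/off logit

    def forward(self, x, adj):
        # x: (n_links, feat_dim) link features (e.g., channel gains)
        # adj: (n_links, n_links) interference graph between links
        h = torch.relu(self.enc(x))
        h = torch.relu(self.mix(adj @ h))          # aggregate interfering neighbors
        return torch.sigmoid(self.head(h)).squeeze(-1)   # probability each link transmits


sched = LinkScheduler(feat_dim=2)
x, adj = torch.rand(10, 2), (torch.rand(10, 10) > 0.7).float()
on_off = (sched(x, adj) > 0.5).int()   # binary schedule; trainable end-to-end
```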
arXiv Detail & Related papers (2021-10-04T21:22:12Z)
- Iterative Graph Self-Distillation [161.04351580382078]
We propose a novel unsupervised graph learning paradigm called Iterative Graph Self-Distillation (IGSD).
IGSD iteratively performs the teacher-student distillation with graph augmentations.
We show that we achieve significant and consistent performance gain on various graph datasets in both unsupervised and semi-supervised settings.
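A sketch of one teacher-student distillation step over augmented views (a generic BYOL-style pattern with placeholder encoder, augmentation, and EMA rate; only the iterative-distillation idea is illustrated):

```python
# The student matches the frozen teacher's embedding of a differently augmented
# view; the teacher then tracks the student via an exponential moving average.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))
teacher = copy.deepcopy(student)           # teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def augment(g):
    return g + 0.1 * torch.randn_like(g)   # stand-in for a graph augmentation

graph_repr = torch.randn(8, 16)            # pooled representations of 8 graphs
view_s, view_t = augment(graph_repr), augment(graph_repr)

loss = 2 - 2 * F.cosine_similarity(student(view_s), teacher(view_t), dim=-1).mean()
opt.zero_grad()
loss.backward()
opt.step()

with torch.no_grad():                      # EMA update of the teacher
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(0.99).add_(0.01 * ps)
```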
arXiv Detail & Related papers (2020-10-23T18:37:06Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
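A minimal sketch of maximizing mutual information between node inputs and encoder outputs via an InfoNCE-style bound (a generic stand-in, not the GMI decomposition itself, which scores feature and edge terms separately):

```python
# Score every (input node, embedding) pair with a bilinear discriminator;
# matched pairs sit on the diagonal, and the InfoNCE loss pushes their scores
# above the mismatched ones, maximizing a lower bound on mutual information.
import torch
import torch.nn as nn
import torch.nn.functional as F

n, feat, hidden = 12, 8, 16
encoder = nn.Linear(feat, hidden)            # stand-in for a graph neural encoder
scorer = nn.Bilinear(feat, hidden, 1)        # discriminates matched vs. mismatched pairs

x = torch.randn(n, feat)
h = torch.relu(encoder(x))                   # node embeddings (graph structure omitted)

scores = scorer(x.repeat_interleave(n, 0), h.repeat(n, 1)).view(n, n)
loss = F.cross_entropy(scores, torch.arange(n))   # InfoNCE lower bound on MI

loss.backward()   # maximizing MI = minimizing this loss w.r.t. encoder and scorer
```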
arXiv Detail & Related papers (2020-02-04T08:33:49Z)