A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations
- URL: http://arxiv.org/abs/2012.11524v1
- Date: Mon, 21 Dec 2020 17:39:51 GMT
- Title: A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations
- Authors: Yexiang Chen, Subhash Lakshminarayana, Carsten Maple, H. Vincent Poor
- Abstract summary: We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
- Score: 69.73803123972297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, there has been a surge of interest in adopting deep neural networks
(DNNs) for solving the optimal power flow (OPF) problem in power systems.
Computing optimal generation dispatch decisions using a trained DNN takes
significantly less time when compared to using conventional optimization
solvers. However, a major drawback of existing work is that the machine
learning models are trained for a specific system topology. Hence, the DNN
predictions are only useful as long as the system topology remains unchanged.
Changes to the system topology (initiated by the system operator) would require
retraining the DNN, which incurs significant training overhead and requires an
extensive amount of training data (corresponding to the new system topology).
To overcome this drawback, we propose a DNN-based OPF predictor that is trained
using a meta-learning (MTL) approach. The key idea behind this approach is to
find a common initialization vector that enables fast training for any system
topology. The developed OPF-predictor is validated through simulations using
benchmark IEEE bus systems. The results show that the MTL approach achieves
significant training speed-ups and requires only a few gradient steps with a
few data samples to achieve high OPF prediction accuracy.
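The "common initialization vector" idea is the model-agnostic meta-learning family of algorithms. Below is a minimal sketch of how such an initialization could be meta-trained, assuming a simple MLP that maps load profiles to generator set-points; a first-order (Reptile-style) outer update is used for brevity, and all task data, sizes, and hyperparameters are illustrative rather than the paper's.

```python
# Minimal sketch of meta-learning a shared initialization for DNN-based
# OPF prediction across topologies, using a first-order (Reptile-style)
# outer update. Task data, sizes, and hyperparameters are illustrative.
import copy
import torch
import torch.nn as nn

def make_opf_net(n_loads=10, n_gens=5):
    # Maps a load profile to generator dispatch set-points.
    return nn.Sequential(nn.Linear(n_loads, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, n_gens))

def inner_adapt(net, loads, dispatch, steps=5, lr=1e-2):
    # A few gradient steps on one topology's (load, dispatch) samples.
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(net(loads), dispatch).backward()
        opt.step()
    return net

meta_net = make_opf_net()
meta_lr = 0.1
for _ in range(1000):
    # One "topology task"; in practice these samples would be OPF
    # solutions computed for a specific network configuration.
    loads, dispatch = torch.rand(32, 10), torch.rand(32, 5)
    adapted = inner_adapt(copy.deepcopy(meta_net), loads, dispatch)
    with torch.no_grad():
        # Move the shared initialization toward the adapted weights.
        for p, q in zip(meta_net.parameters(), adapted.parameters()):
            p += meta_lr * (q - p)
```

After meta-training, adapting to a new topology amounts to calling `inner_adapt` once on a handful of samples, which matches the abstract's claim of a few gradient steps sufficing.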
Related papers
- Graph Neural Network-Accelerated Network-Reconfigured Optimal Power Flow [0.24554686192257422]
This paper proposes a machine learning (ML)-based approach, particularly one utilizing a graph neural network (GNN), to accelerate network-reconfigured OPF.
The GNN model is trained offline to predict the best topology before entering the optimization stage.
A fast online post-ML selection layer is also proposed to analyze the GNN predictions and then select a subset of predicted network-reconfiguration (NR) solutions with high confidence.
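As a hypothetical illustration of such a post-ML selection layer (the thresholding rule and all names below are assumptions, not the paper's exact design), one might keep only the candidates scored with high relative confidence:

```python
# Hypothetical sketch of a post-ML selection layer: keep only the
# network-reconfiguration candidates the GNN scores with high relative
# confidence, then pass that small subset to the optimization stage.
import torch

def select_confident(logits, rel_threshold=0.5, k_max=3):
    # logits: (n_candidates,) GNN scores over candidate topologies.
    probs = torch.softmax(logits, dim=0)
    order = torch.argsort(probs, descending=True)
    top = probs[order[0]]
    # Keep at most k_max candidates whose confidence is close to the best.
    return [int(i) for i in order[:k_max] if probs[i] >= rel_threshold * top]

print(select_confident(torch.tensor([2.0, 1.8, -1.0, 0.2])))  # e.g. [0, 1]
```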
arXiv Detail & Related papers (2024-10-22T22:35:09Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs)
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
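An illustrative shared-backbone, multi-head model in the spirit of MEMTL follows; the layer sizes and the simple averaging rule are assumptions, not the authors' exact design.

```python
# Illustrative shared-backbone / multi-head ensemble: a common feature
# extractor feeds several prediction heads whose outputs are ensembled.
import torch
import torch.nn as nn

class MultiHeadEnsemble(nn.Module):
    def __init__(self, d_in=16, d_hidden=64, d_out=4, n_heads=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                      nn.Linear(d_hidden, d_hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(d_hidden, d_out)
                                   for _ in range(n_heads))

    def forward(self, x):
        h = self.backbone(x)                 # features shared by all heads
        preds = torch.stack([head(h) for head in self.heads])
        return preds.mean(dim=0)             # ensemble the prediction heads

model = MultiHeadEnsemble()
print(model(torch.rand(8, 16)).shape)        # torch.Size([8, 4])
```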
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Decouple Graph Neural Networks: Train Multiple Simple GNNs Simultaneously Instead of One [60.5818387068983]
Graph neural networks (GNNs) suffer from severe training inefficiency.
We propose to decouple a multi-layer GNN as multiple simple modules for more efficient training.
We show that the proposed framework is highly efficient with reasonable performance.
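A hedged sketch of the decoupling idea: train each simple GNN module greedily with its own local readout instead of backpropagating through the full multi-layer stack. The one-layer propagation rule and the local classification loss are placeholders, not the paper's exact formulation.

```python
# Decoupled (greedy, per-module) GNN training sketch.
import torch
import torch.nn as nn

def propagate(adj, x, lin):
    # One degree-normalized propagation step plus a linear map.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return torch.relu(lin(adj @ x / deg))

n, d, c = 50, 8, 3
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.T) > 0).float()                  # symmetrize
x, y = torch.rand(n, d), torch.randint(0, c, (n,))

h = x
for _ in range(3):                                 # three decoupled modules
    lin, probe = nn.Linear(h.shape[1], 16), nn.Linear(16, c)
    opt = torch.optim.Adam(list(lin.parameters()) + list(probe.parameters()),
                           lr=1e-2)
    for _ in range(100):                           # local training only
        opt.zero_grad()
        nn.functional.cross_entropy(probe(propagate(adj, h, lin)), y).backward()
        opt.step()
    h = propagate(adj, h, lin).detach()            # freeze, feed forward
```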
arXiv Detail & Related papers (2023-04-20T07:21:32Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
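The following sketches the pretrain-then-adapt recipe under stated assumptions: learn node embeddings by predicting node voltages, then reuse the frozen embedding layers with a fresh head for a new circuit-level task. The two-step propagation and layer sizes are illustrative.

```python
# Pretrain a tiny GNN on node-voltage prediction, then adapt its frozen
# embeddings to a new circuit-level property with a fresh head.
import torch
import torch.nn as nn

class TinyCircuitGNN(nn.Module):
    def __init__(self, d_in=4, d_h=32):
        super().__init__()
        self.l1, self.l2 = nn.Linear(d_in, d_h), nn.Linear(d_h, d_h)
        self.volt_head = nn.Linear(d_h, 1)  # pretraining target: node voltage

    def embed(self, adj, x):                # adj: (n, n), x: (n, d_in)
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        h = torch.relu(self.l1(adj @ x / deg))
        return torch.relu(self.l2(adj @ h / deg))

    def forward(self, adj, x):
        return self.volt_head(self.embed(adj, x))

gnn = TinyCircuitGNN()
# ... pretrain `gnn` on voltage prediction over many circuit graphs ...
for p in gnn.parameters():                  # freeze pretrained weights
    p.requires_grad_(False)
prop_head = nn.Linear(32, 1)                # new circuit-level property head
adj, x = torch.eye(6), torch.rand(6, 4)     # a toy 6-node "circuit"
graph_emb = gnn.embed(adj, x).mean(dim=0)   # pooled graph representation
print(prop_head(graph_emb))
```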
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- DNN Training Acceleration via Exploring GPGPU Friendly Sparsity [16.406482603838157]
We propose Approximate Random Dropout, which replaces the conventional random dropout of neurons and synapses with regular, online-generated row-based or tile-based dropout patterns.
We then develop an SGD-based search algorithm that produces the distribution of row-based or tile-based dropout patterns to compensate for the potential accuracy loss.
We also propose the sensitivity-aware dropout method to dynamically drop the input feature maps based on their sensitivity so as to achieve greater forward and backward training acceleration.
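A hedged illustration of a regular, tile-based dropout mask: whole tiles of the activation matrix are dropped together, so the surviving work stays in dense, GPU-friendly blocks. Standard inverted-dropout rescaling is used here; the SGD-based pattern search from the paper is not shown.

```python
# Tile-based structured dropout: drop whole tiles, not single elements.
import torch

def tile_dropout(x, p=0.5, tile=8):
    # Assumes both dimensions of x are divisible by the tile size.
    rows, cols = x.shape[0] // tile, x.shape[1] // tile
    keep = (torch.rand(rows, cols) >= p).float()
    # Expand the tile-level mask to element resolution.
    mask = keep.repeat_interleave(tile, 0).repeat_interleave(tile, 1)
    return x * mask / (1 - p)  # rescale so expected activation is unchanged

x = torch.rand(64, 64)
print(tile_dropout(x).count_nonzero())  # roughly half the entries survive
```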
arXiv Detail & Related papers (2022-03-11T01:32:03Z)
- Leveraging power grid topology in machine learning assisted optimal power flow [0.5076419064097734]
Machine learning assisted optimal power flow (OPF) aims to reduce the computational complexity of these non-linear and non-convex constrained optimization problems.
We assess the performance of a variety of FCNN, CNN and GNN models for two fundamental approaches to machine learning assisted OPF.
For several synthetic grids with interconnected utilities, we show that locality properties between feature and target variables are scarce.
arXiv Detail & Related papers (2021-10-01T10:39:53Z)
- U-FNO -- an enhanced Fourier neural operator-based deep learning model for multiphase flow [43.572675744374415]
We present U-FNO, an enhanced Fourier neural operator for solving the multiphase flow problem.
We show that the U-FNO architecture has the advantages of both traditional CNN and original FNO, providing significantly more accurate and efficient performance.
The trained U-FNO provides gas saturation and pressure buildup predictions with a 10,000 times speedup compared to traditional numerical simulators.
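U-FNO augments the Fourier neural operator with a U-Net path; the sketch below shows only the core 1-D spectral convolution that FNO-style models build on, with illustrative mode and channel counts.

```python
# Core FNO building block: convolve in the frequency domain by mixing
# channels on a truncated set of low-frequency Fourier modes.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, c_in, c_out, n_modes):
        super().__init__()
        self.n_modes = n_modes
        scale = 1.0 / (c_in * c_out)
        self.w = nn.Parameter(scale * torch.randn(c_in, c_out, n_modes,
                                                  dtype=torch.cfloat))

    def forward(self, x):                     # x: (batch, c_in, n_grid)
        xf = torch.fft.rfft(x)                # to the frequency domain
        out = torch.zeros(x.shape[0], self.w.shape[1], xf.shape[-1],
                          dtype=torch.cfloat)
        # Mix channels on the lowest n_modes frequencies only.
        out[..., :self.n_modes] = torch.einsum(
            "bim,iom->bom", xf[..., :self.n_modes], self.w)
        return torch.fft.irfft(out, n=x.shape[-1])

layer = SpectralConv1d(3, 3, n_modes=16)
print(layer(torch.rand(2, 3, 128)).shape)     # torch.Size([2, 3, 128])
```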
arXiv Detail & Related papers (2021-09-03T17:52:25Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
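A sketch of the idea for matrix-shaped inputs follows: each hidden unit's weight matrix is constrained to rank R, W_k = sum_r a_{kr} b_{kr}^T, so the input is contracted mode-by-mode without vectorization. The sizes and the sigmoid nonlinearity are illustrative assumptions.

```python
# Rank-R layer: CP-constrained weights contracted against matrix inputs.
import torch
import torch.nn as nn

class RankRLayer(nn.Module):
    def __init__(self, d1, d2, n_hidden, rank):
        super().__init__()
        self.A = nn.Parameter(0.1 * torch.randn(n_hidden, rank, d1))
        self.B = nn.Parameter(0.1 * torch.randn(n_hidden, rank, d2))

    def forward(self, x):                        # x: (batch, d1, d2)
        # <W_k, X> = sum_r a_{kr}^T X b_{kr}, for all hidden units k.
        ax = torch.einsum("krd,bde->bkre", self.A, x)
        return torch.sigmoid(torch.einsum("bkre,kre->bk", ax, self.B))

layer = RankRLayer(d1=9, d2=16, n_hidden=32, rank=3)
print(layer(torch.rand(4, 9, 16)).shape)         # torch.Size([4, 32])
```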
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to learn the solutions of the AC optimal power flow (AC-OPF).
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated into other learning-to-OPF schemes.
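One way to read "sensitivity-informed" is a loss that, besides fitting the OPF solutions, penalizes the mismatch between the DNN's input-output Jacobian and solver-provided sensitivities; the sketch below assumes such sensitivities dy*/dx are available, and the paper's exact loss may differ. All data here is random.

```python
# Sensitivity-informed training loss sketch: fit solutions and match
# the network Jacobian to solver sensitivities.
import torch
import torch.nn as nn
from torch.autograd.functional import jacobian

net = nn.Sequential(nn.Linear(6, 32), nn.Tanh(), nn.Linear(32, 2))
x = torch.rand(6)        # one load profile
y_star = torch.rand(2)   # OPF solution at x (from a conventional solver)
sens = torch.rand(2, 6)  # solver sensitivities dy*/dx at x

jac = jacobian(net, x, create_graph=True)      # DNN Jacobian, shape (2, 6)
loss = nn.functional.mse_loss(net(x), y_star) \
     + 0.1 * nn.functional.mse_loss(jac, sens)
loss.backward()          # gradients flow through both terms
print(float(loss))
```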
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.