Optimal Power Flow Based on Physical-Model-Integrated Neural Network
with Worth-Learning Data Generation
- URL: http://arxiv.org/abs/2301.03766v1
- Date: Tue, 10 Jan 2023 03:06:08 GMT
- Title: Optimal Power Flow Based on Physical-Model-Integrated Neural Network
with Worth-Learning Data Generation
- Authors: Zuntao Hu and Hongcai Zhang
- Abstract summary: We propose an OPF solver based on a physical-model-integrated neural network (NN) with worth-learning data generation.
We show that the proposed method leads to an over 50% reduction of constraint violations and optimality loss compared to conventional NN solvers.
- Score: 1.370633147306388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fast and reliable solvers for optimal power flow (OPF) problems are
attracting surging research interest. As surrogates of physical-model-based OPF
solvers, neural network (NN) solvers can accelerate the solving process.
However, they may be unreliable for ``unseen" inputs when the training dataset
is unrepresentative. Enhancing the representativeness of the training dataset
for NN solvers is indispensable but is not well studied in the literature. To
tackle this challenge, we propose an OPF solver based on a
physical-model-integrated NN with worth-learning data generation. The designed
NN is a combination of a conventional multi-layer perceptron (MLP) and an
OPF-model module, which outputs not only the optimal decision variables of the
OPF problem but also the constraint violation degree. Based on this NN, the
worth-learning data generation method can identify feasible samples that are
not well generalized by the NN. By iteratively applying this method and
including the newly identified worth-learning samples in the training set, the
representativeness of the training set can be significantly enhanced.
Therefore, the solution reliability of the NN solver can be remarkably
improved. Experimental results show that the proposed method leads to an over
50% reduction of constraint violations and optimality loss compared to
conventional NN solvers.
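The abstract's loop (predict with an MLP, score constraint violations with a physics-based module, and feed poorly generalized samples back into the training set) can be sketched as follows. This is a minimal illustration assuming toy box constraints and a hand-rolled MLP; the function names, limits, and threshold are assumptions for illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_predict(weights, load_profile):
    """Toy stand-in for the MLP surrogate: one hidden layer with ReLU."""
    hidden = np.maximum(weights["W1"] @ load_profile + weights["b1"], 0.0)
    return weights["W2"] @ hidden + weights["b2"]

def violation_degree(decision):
    """Toy stand-in for the OPF-model module: total overshoot of assumed
    box limits [-1, 1] on the predicted decision variables."""
    return float(np.sum(np.maximum(decision - 1.0, 0.0) +
                        np.maximum(-1.0 - decision, 0.0)))

def worth_learning_samples(weights, candidates, tol=1e-3):
    """Return candidate inputs the current NN handles badly, i.e. whose
    predicted decisions violate the (toy) constraints beyond `tol`."""
    return [x for x in candidates
            if violation_degree(mlp_predict(weights, x)) > tol]

# One iteration of the data-generation loop: augment the training set with
# the newly identified worth-learning samples, then (not shown) retrain.
weights = {"W1": rng.normal(size=(4, 3)), "b1": np.zeros(4),
           "W2": rng.normal(size=(2, 4)), "b2": np.zeros(2)}
candidates = [rng.normal(size=3) for _ in range(20)]
training_set = []
training_set += worth_learning_samples(weights, candidates)
```

Iterating this selection-and-retraining step is what, per the abstract, enhances the representativeness of the training set.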
Related papers
- The Finite Element Neural Network Method: One Dimensional Study [0.0]
This research introduces the finite element neural network method (FENNM) within the framework of the Petrov-Galerkin method.
FENNM uses convolution operations to approximate the weighted residual of the differential equations.
This enables the integration of forcing terms and natural boundary conditions into the loss function similar to conventional finite element method (FEM) solvers.
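As a rough illustration of evaluating a weighted residual by convolution (the stencil, names, and toy problem below are assumptions for illustration, not FENNM's implementation), consider the 1-D Poisson problem -u'' = f on a uniform grid, with hat-shaped test functions applied via `np.convolve`:

```python
import numpy as np

def weighted_residual_loss(u, f, dx):
    """Sum of squared weighted residuals for -u'' = f on a uniform grid.
    The pointwise residual at interior nodes is convolved with a
    hat-test-function stencil (trapezoidal quadrature weights), mimicking
    the integral of residual times test function over each element."""
    d2u = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2  # central second difference
    r = -d2u - f[1:-1]                              # pointwise residual
    hat = np.array([0.25, 0.5, 0.25]) * dx          # quadrature of hat function
    weighted = np.convolve(r, hat, mode="same")
    return float(np.sum(weighted**2))

# The exact solution of -u'' = 1 with u(0) = u(1) = 0 is u(x) = x(1 - x)/2;
# its central second difference is exact for a quadratic, so the loss
# vanishes up to floating-point error.
x = np.linspace(0.0, 1.0, 11)
u_exact = x * (1.0 - x) / 2.0
loss = weighted_residual_loss(u_exact, np.ones_like(x), x[1] - x[0])
```

In a learning setting, `u` would be the network's output on the grid and this loss (plus boundary terms) would be minimized by gradient descent.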
arXiv Detail & Related papers (2025-01-21T21:39:56Z) - Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural network-like predictors.
LFP decomposes a reward to individual neurons based on their respective contributions to solving a given task.
Our method then implements a greedy approach reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploy large-scale Deep Neural Networks on constrained resources.
The method speeds up inference time and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z) - Leveraging power grid topology in machine learning assisted optimal
power flow [0.5076419064097734]
Machine learning assisted optimal power flow (OPF) aims to reduce the computational complexity of non-linear and non-convex constrained power flow problems.
We assess the performance of a variety of FCNN, CNN and GNN models for two fundamental approaches to machine learning assisted OPF.
For several synthetic grids with interconnected utilities, we show that locality properties between feature and target variables are scarce.
arXiv Detail & Related papers (2021-10-01T10:39:53Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SI-DNN) to solve the AC optimal power flow (AC-OPF) problem.
The proposed SI-DNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.