Power Flow Approximations for Multiphase Distribution Networks using Gaussian Processes
- URL: http://arxiv.org/abs/2504.21260v1
- Date: Wed, 30 Apr 2025 02:26:31 GMT
- Title: Power Flow Approximations for Multiphase Distribution Networks using Gaussian Processes
- Authors: Daniel Glover, Parikshit Pareek, Deepjyoti Deka, Anamika Dubey
- Abstract summary: This study introduces a data-driven power flow method based on Gaussian Processes (GPs) to approximate the multiphase power flow model. Simulation results using the IEEE 123-bus and 8500-node distribution test feeders demonstrate that the trained GP model can reliably predict the nonlinear power flow solutions. We also conduct a comparative analysis of the training efficiency and testing performance of the proposed GP-based power flow approximator against a deep neural network-based approximator.
- Score: 0.2812395851874055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning-based approaches are increasingly leveraged to manage and coordinate the operation of grid-edge resources in active power distribution networks. Among these, model-based techniques stand out for their superior data efficiency and robustness compared to model-free methods. However, effective model learning requires a learning-based approximator for the underlying power flow model. This study extends existing work by introducing a data-driven power flow method based on Gaussian Processes (GPs) that approximates the multiphase power flow model by mapping net load injections to nodal voltages. Simulation results using the IEEE 123-bus and 8500-node distribution test feeders demonstrate that the trained GP model can reliably predict the nonlinear power flow solutions with minimal training data. We also conduct a comparative analysis of the training efficiency and testing performance of the proposed GP-based power flow approximator against a deep neural network-based approximator, highlighting the advantages of our data-efficient approach. Results over realistic operating conditions show that despite an 85% reduction in training sample size (corresponding to a 92.8% improvement in training time), GP models achieve a 99.9% relative reduction in mean absolute error compared to the deep neural network baselines.
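The abstract describes a regression map from net load injections to nodal voltages learned with a GP and benchmarked against a deep neural network. Below is a minimal sketch of that recipe, not the paper's implementation: the `synthetic_power_flow` stand-in, the node count, the RBF kernel, and the MLP baseline architecture are all assumptions, since the actual feeder models (IEEE 123-bus / 8500-node, solved with a tool such as OpenDSS) and the paper's hyperparameters are not reproduced here.

```python
"""Illustrative sketch only: NOT the authors' code, data, or settings."""
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_nodes = 30  # hypothetical number of monitored node/phase voltages


def synthetic_power_flow(p_kw: np.ndarray) -> np.ndarray:
    """Placeholder for an actual multiphase power flow solve.

    Maps active-power injections (kW, one column per node) to per-unit
    voltage magnitudes through a smooth nonlinear voltage-drop model, so
    the learning problem has the same flavor as the paper's setup.
    """
    sensitivity = 0.0005 / (1.0 + np.arange(n_nodes))  # fake impedance-like sensitivities
    drop = p_kw * sensitivity
    return 1.0 - drop - 5.0 * drop**2


def sample_loads(n_scenarios: int) -> np.ndarray:
    """Draw net load scenarios around a nominal operating point."""
    return rng.uniform(10.0, 100.0, size=(n_scenarios, n_nodes))


# Small training set (the paper's point is data efficiency), larger test set.
X_train, X_test = sample_loads(150), sample_loads(500)
y_train, y_test = synthetic_power_flow(X_train), synthetic_power_flow(X_test)

# GP approximator with a squared-exponential (RBF) kernel; scikit-learn
# fits the voltage at every node as a separate output of one regressor.
gp = GaussianProcessRegressor(
    kernel=C(1.0) * RBF(length_scale=50.0),
    alpha=1e-8,
    normalize_y=True,
)
gp.fit(X_train, y_train)
v_gp, v_std = gp.predict(X_test, return_std=True)

# DNN baseline for the comparison mentioned in the abstract (architecture is a guess).
dnn = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=3000, random_state=0)
dnn.fit(X_train, y_train)
v_dnn = dnn.predict(X_test)

print(f"GP  voltage MAE (p.u.): {mean_absolute_error(y_test, v_gp):.2e}")
print(f"DNN voltage MAE (p.u.): {mean_absolute_error(y_test, v_dnn):.2e}")
print(f"mean GP predictive std: {v_std.mean():.2e}")
```

In this style of setup the GP typically reaches low voltage error with far fewer samples than the neural network, and its predictive standard deviation provides a rough confidence signal for unseen load scenarios, which is the data-efficiency argument the abstract makes.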
Related papers
- Nonparametric Data Attribution for Diffusion Models [57.820618036556084]
Data attribution for generative models seeks to quantify the influence of individual training examples on model outputs. We propose a nonparametric attribution method that operates entirely on data, measuring influence via patch-level similarity between generated and training images.
arXiv Detail & Related papers (2025-10-16T03:37:16Z)
- Efficient Federated Learning with Timely Update Dissemination [54.668309196009204]
Federated Learning (FL) has emerged as a compelling methodology for the management of distributed data. We propose an efficient FL approach that capitalizes on additional downlink bandwidth resources to ensure timely update dissemination.
arXiv Detail & Related papers (2025-07-08T14:34:32Z)
- Generative Diffusion Models for Resource Allocation in Wireless Networks [77.36145730415045]
We train a policy to imitate an expert and generate new samples from the optimal distribution. We achieve near-optimal performance through sequential execution of the generated samples. We present numerical results in a case study of power control in multi-user interference networks.
arXiv Detail & Related papers (2025-04-28T21:44:31Z)
- FlowTS: Time Series Generation via Rectified Flow [67.41208519939626]
FlowTS is an ODE-based model that leverages rectified flow with straight-line transport in probability space.
In the unconditional setting, FlowTS achieves state-of-the-art performance, with context FID scores of 0.019 and 0.011 on the Stock and ETTh datasets.
In the conditional setting, it achieves superior performance in solar forecasting.
arXiv Detail & Related papers (2024-11-12T03:03:23Z)
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
arXiv Detail & Related papers (2024-07-18T17:35:32Z)
- Streamflow Prediction with Uncertainty Quantification for Water Management: A Constrained Reasoning and Learning Approach [27.984958596544278]
This paper studies a constrained reasoning and learning (CRL) approach where physical laws represented as logical constraints are integrated as a layer in the deep neural network.
To address the small-data setting, we develop a theoretically grounded training approach to improve the generalization accuracy of deep models.
arXiv Detail & Related papers (2024-05-31T18:53:53Z)
- Predicting Traffic Flow with Federated Learning and Graph Neural with Asynchronous Computations Network [0.0]
We present a novel deep-learning method called Federated Learning and Asynchronous Graph Convolutional Networks (FLAGCN).
Our framework combines the principles of asynchronous graph convolutional networks with federated learning to enhance the accuracy and efficiency of real-time traffic flow prediction.
arXiv Detail & Related papers (2024-01-05T09:36:42Z)
- PowerFlowNet: Power Flow Approximation Using Message Passing Graph Neural Networks [2.450802099490248]
Graph Neural Networks (GNNs) have emerged as a promising approach for improving the accuracy and speed of power flow approximations.
In this study, we introduce PowerFlowNet, a novel GNN architecture for PF approximation that achieves performance comparable to the traditional Newton-Raphson method.
It significantly outperforms other traditional approximation methods, such as the DC relaxation method, in terms of performance and execution time.
arXiv Detail & Related papers (2023-11-06T09:44:00Z)
- End-to-End Reinforcement Learning of Koopman Models for Economic Nonlinear Model Predictive Control [45.84205238554709]
We present a method for reinforcement learning of Koopman surrogate models for optimal performance as part of (e)NMPC.
We show that the end-to-end trained models outperform those trained using system identification in (e)NMPC.
arXiv Detail & Related papers (2023-08-03T10:21:53Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST).
IST is a recently proposed and highly effective technique for solving the aforementioned problems.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- Towards Compute-Optimal Transfer Learning [82.88829463290041]
We argue that zero-shot structured pruning of pretrained models allows them to increase compute efficiency with minimal reduction in performance.
Our results show that pruning convolutional filters of pretrained models can lead to more than 20% performance improvement in low computational regimes.
arXiv Detail & Related papers (2023-04-25T21:49:09Z)
- Normalizing flow neural networks by JKO scheme [22.320632565424745]
We develop a neural ODE flow network called JKO-iFlow, inspired by the Jordan-Kinderlehrer-Otto scheme.
The proposed method stacks residual blocks one after another, allowing efficient block-wise training of the residual blocks.
Experiments with synthetic and real data show that the proposed JKO-iFlow network achieves competitive performance.
arXiv Detail & Related papers (2022-12-29T18:55:00Z)
- Efficient Learning of Voltage Control Strategies via Model-based Deep Reinforcement Learning [9.936452412191326]
This article proposes a model-based deep reinforcement learning (DRL) method to design emergency control strategies for short-term voltage stability problems in power systems.
Recent advances show promising results in model-free DRL-based methods for power systems, but model-free methods suffer from poor sample efficiency and long training times.
We propose a novel model-based-DRL framework where a deep neural network (DNN)-based dynamic surrogate model is utilized with the policy learning framework.
arXiv Detail & Related papers (2022-12-06T02:50:53Z)
- Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training [126.49572353148262]
We study two factors in neural network training: data parallelism and sparsity.
Despite their promising benefits, understanding of their effects on neural network training remains elusive.
arXiv Detail & Related papers (2020-03-25T10:49:22Z)
- Deep Residual Flow for Out of Distribution Detection [27.218308616245164]
We present a novel approach that improves upon the state-of-the-art by leveraging an expressive density model based on normalizing flows.
We demonstrate the effectiveness of our method in ResNet and DenseNet architectures trained on various image datasets.
arXiv Detail & Related papers (2020-01-15T16:38:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.