Data-driven Optimal Power Flow: A Physics-Informed Machine Learning
Approach
- URL: http://arxiv.org/abs/2006.00544v1
- Date: Sun, 31 May 2020 15:41:24 GMT
- Title: Data-driven Optimal Power Flow: A Physics-Informed Machine Learning
Approach
- Authors: Xingyu Lei, Zhifang Yang, Juan Yu, Junbo Zhao, Qian Gao, Hongxin Yu
- Abstract summary: This paper proposes a data-driven approach for optimal power flow (OPF) based on the stacked extreme learning machine (SELM) framework.
A data-driven OPF regression framework is developed that decomposes the OPF model features into three stages.
Numerical results carried out on IEEE and Polish benchmark systems demonstrate that the proposed method outperforms other alternatives.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a data-driven approach for optimal power flow (OPF) based
on the stacked extreme learning machine (SELM) framework. SELM trains quickly
and, unlike deep learning algorithms, does not require a time-consuming
parameter tuning process. However, the direct application of
SELM for OPF is not tractable due to the complicated relationship between the
system operating status and the OPF solutions. To this end, a data-driven OPF
regression framework is developed that decomposes the OPF model features into
three stages. This not only reduces the learning complexity but also helps
correct the learning bias. A sample pre-classification strategy based on active
constraint identification is also developed to achieve enhanced feature
extraction. Numerical results on IEEE and Polish benchmark systems
demonstrate that the proposed method outperforms other alternatives. It is also
shown that the proposed method can be easily extended to address different test
systems by adjusting only a few hyperparameters.
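The abstract's core idea, extreme learning machines stacked in stages, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual model: the hidden-layer size, the three-stage count, and the residual-fitting scheme used to "stack" the stages are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Single-hidden-layer extreme learning machine regressor.

    Hidden weights are random and fixed; only the output weights are
    fit, in closed form via least squares -- which is why training is
    fast and needs no iterative tuning.
    """
    def __init__(self, n_hidden=50):
        self.n_hidden = n_hidden

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = rng.normal(size=(n_features, self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Closed-form output weights: least-squares solve of H @ beta = y
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

def fit_stacked(X, y, n_stages=3):
    """Stack ELMs by letting each stage fit the previous stage's
    residual (an illustrative stand-in for the paper's staged
    feature decomposition)."""
    models, residual = [], y.copy()
    for _ in range(n_stages):
        m = ELM().fit(X, residual)
        residual = residual - m.predict(X)
        models.append(m)
    return models

def predict_stacked(models, X):
    return sum(m.predict(X) for m in models)
```

Because each stage only solves a linear least-squares problem, the whole stack trains in milliseconds on small inputs, which is the speed argument the abstract makes against deep learning.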
Related papers
- Federated Learning of Large Language Models with Parameter-Efficient
Prompt Tuning and Adaptive Optimization [71.87335804334616]
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data.
The training process of Large Language Models (LLMs) generally incurs the update of significant parameters.
This paper proposes an efficient partial prompt tuning approach to improve performance and efficiency simultaneously.
arXiv Detail & Related papers (2023-10-23T16:37:59Z)
- FedDA: Faster Framework of Local Adaptive Gradient Methods via Restarted
Dual Averaging [104.41634756395545]
Federated learning (FL) is an emerging learning paradigm to tackle massively distributed data.
We propose FedDA, a novel framework for local adaptive gradient methods.
We show that FedDA-MVR is the first adaptive FL algorithm that achieves this rate.
arXiv Detail & Related papers (2023-02-13T05:10:30Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on momentum-based variance reduced technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- FedGPO: Heterogeneity-Aware Global Parameter Optimization for Efficient
Federated Learning [11.093360539563657]
Federated learning (FL) has emerged as a solution to deal with the risk of privacy leaks in machine learning training.
We propose FedGPO to optimize the energy-efficiency of FL use cases while guaranteeing model convergence.
In our experiments, FedGPO improves the model convergence time by 2.4 times, and achieves 3.6 times higher energy efficiency over the baseline settings.
arXiv Detail & Related papers (2022-11-30T01:22:57Z)
- Towards Understanding the Unreasonable Effectiveness of Learning AC-OPF
Solutions [31.388212637482365]
Optimal Power Flow (OPF) is a fundamental problem in power systems.
Recent research has proposed the use of Deep Neural Networks (DNNs) to find OPF approximations at vastly reduced runtimes.
This paper provides a step forward to address this knowledge gap.
arXiv Detail & Related papers (2021-11-22T13:04:31Z)
- Hybrid Federated Learning: Algorithms and Implementation [61.0640216394349]
Federated learning (FL) is a recently proposed distributed machine learning paradigm dealing with distributed and private data sets.
We propose a new model-matching-based problem formulation for hybrid FL.
We then propose an efficient algorithm that can collaboratively train the global and local models to deal with full and partial featured data.
arXiv Detail & Related papers (2020-12-22T23:56:03Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Federated Learning via Intelligent Reflecting Surface [30.935389187215474]
Over-the-air computation (AirComp) based federated learning (FL) is capable of achieving fast model aggregation by exploiting the waveform superposition property of multiple access channels.
In this paper, we propose a two-step optimization framework to achieve fast yet reliable model aggregation for AirComp-based FL.
Simulation results demonstrate that our proposed framework and the deployment of an IRS can achieve a lower training loss and higher FL prediction accuracy than the baseline algorithms.
arXiv Detail & Related papers (2020-11-10T11:29:57Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity
to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
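The "computation then aggregation" (CTA) pattern mentioned in the FedPD entry can be sketched with a toy FedAvg loop. The least-squares clients, learning rate, epoch and round counts below are illustrative assumptions, not the setup of any paper above.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """Computation phase: each client runs a few epochs of gradient
    descent on its own least-squares loss, never sharing raw data."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(client_data, w, rounds=20):
    """Aggregation phase: after local training, the server averages
    the client models, weighted by each client's dataset size."""
    for _ in range(rounds):
        sizes = [len(y) for _, y in client_data]
        local_models = [local_step(w, X, y) for X, y in client_data]
        w = sum(n * wl for n, wl in zip(sizes, local_models)) / sum(sizes)
    return w
```

With clients whose data share one underlying linear model, the averaged iterate converges to that model; the non-IID setting FedPD targets is precisely where this simple averaging starts to struggle.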
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.