Initializing Successive Linear Programming Solver for ACOPF using
Machine Learning
- URL: http://arxiv.org/abs/2007.09210v1
- Date: Fri, 17 Jul 2020 20:01:55 GMT
- Title: Initializing Successive Linear Programming Solver for ACOPF using
Machine Learning
- Authors: Sayed Abdullah Sadat, Mostafa Sahraei-Ardakani
- Abstract summary: This paper examines various machine learning (ML) algorithms available in the Scikit-Learn library to initialize an SLP-ACOPF solver.
We evaluate the quality of each of these machine learning algorithms for predicting variables needed for a power flow solution.
The approach is tested on a 3-bus system under both congested and non-congested conditions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Successive linear programming (SLP) approach is one of the favorable
approaches for solving large scale nonlinear optimization problems. Solving an
alternating current optimal power flow (ACOPF) problem is no exception,
particularly considering the large real-world transmission networks across the
country. It is, however, essential to improve the computational performance of
the SLP algorithm. One way to achieve this goal is through the efficient
initialization of the algorithm with a near-optimal solution. This paper
examines various machine learning (ML) algorithms available in the Scikit-Learn
library to initialize an SLP-ACOPF solver, including examining linear and
nonlinear ML algorithms. We evaluate the quality of each of these machine
learning algorithms for predicting variables needed for a power flow solution.
The solution is then used as an initialization for an SLP-ACOPF algorithm. The
approach is tested on a 3-bus system under both congested and non-congested
conditions. The results obtained from the best-performing ML algorithm in this
work are compared with the results obtained when a DCOPF solution is used to
initialize the SLP-ACOPF solver.
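- Illustrative sketch: the warm-start pipeline described in the abstract can be approximated with Scikit-Learn by regressing power-flow variables (voltage magnitudes and angles) on bus load profiles and feeding the prediction to the SLP-ACOPF solver as its starting point. The synthetic data shapes and the `slp_acopf_solve` hook below are hypothetical placeholders for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data for a 3-bus system (assumed shapes, not real data):
# features = active/reactive load at each bus, targets = |V| and theta per bus.
n_samples, n_bus = 500, 3
X = rng.uniform(0.5, 1.5, size=(n_samples, 2 * n_bus))        # P and Q loads
Y = np.hstack([
    1.0 + 0.05 * rng.standard_normal((n_samples, n_bus)),     # |V| in p.u.
    0.1 * rng.standard_normal((n_samples, n_bus)),             # theta in rad
])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Compare one linear and one nonlinear regressor, as the paper surveys both kinds.
models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, Y_tr)
    rmse = np.sqrt(mean_squared_error(Y_te, model.predict(X_te)))
    print(f"{name}: RMSE of predicted power-flow variables = {rmse:.4f}")

# The best model's prediction would then seed the SLP-ACOPF iterations, e.g.:
#   x0 = models["random_forest"].predict(load_profile.reshape(1, -1))
#   solution = slp_acopf_solve(network, initial_point=x0)  # hypothetical hook
```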
Related papers
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for the distributed federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Composite Optimization Algorithms for Sigmoid Networks [3.160070867400839]
We propose composite optimization algorithms based on linearized proximal algorithms and the alternating direction method of multipliers.
Numerical experiments on Franke's function fitting show that the proposed algorithms perform satisfactorily and robustly.
arXiv Detail & Related papers (2023-03-01T15:30:29Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- Optimizing Objective Functions from Trained ReLU Neural Networks via Sampling [0.0]
This paper introduces scalable, sampling-based algorithms that optimize trained neural networks with ReLU activations.
We first propose an iterative algorithm that takes advantage of the piecewise linear structure of ReLU neural networks.
We then extend this approach by searching around the neighborhood of the LP solution computed at each iteration.
arXiv Detail & Related papers (2022-05-27T18:35:48Z)
- Learning to Reformulate for Linear Programming [11.628932152805724]
We propose a reinforcement learning-based reformulation method for linear programming (LP) to improve the performance of the solving process.
We implement the proposed method over two public research LP datasets and one large-scale LP dataset collected from a practical production planning scenario.
arXiv Detail & Related papers (2022-01-17T04:58:46Z)
- Naive Automated Machine Learning [0.0]
We present Naive AutoML, an approach that does precisely this: it optimizes the different algorithms of a pre-defined pipeline scheme in isolation.
The isolated optimization leads to substantially reduced search spaces.
arXiv Detail & Related papers (2021-11-29T13:12:54Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of Automatic Primary Response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Channel Assignment in Uplink Wireless Communication using Machine Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to the high computational complexity, machine learning approaches are employed to obtain computationally efficient solutions.
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.