Learning Regionally Decentralized AC Optimal Power Flows with ADMM
- URL: http://arxiv.org/abs/2205.03787v1
- Date: Sun, 8 May 2022 05:30:35 GMT
- Title: Learning Regionally Decentralized AC Optimal Power Flows with ADMM
- Authors: Terrence W.K. Mak, Minas Chatzos, Mathieu Tanneau, Pascal Van Hentenryck
- Abstract summary: This paper studies how machine learning may help in speeding up the convergence of ADMM for solving AC-OPF.
It proposes a novel decentralized machine-learning approach, namely ML-ADMM, where each agent uses deep learning to learn the consensus parameters on the coupling branches.
- Score: 16.843799157160063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One potential future for the next generation of smart grids is the use of
decentralized optimization algorithms and secured communications for
coordinating renewable generation (e.g., wind/solar), dispatchable devices
(e.g., coal/gas/nuclear generation), demand response, battery & storage
facilities, and topology optimization. The Alternating Direction Method of
Multipliers (ADMM) has been widely used in the community to address such
decentralized optimization problems and, in particular, the AC Optimal Power
Flow (AC-OPF). This paper studies how machine learning may help in speeding up
the convergence of ADMM for solving AC-OPF. It proposes a novel decentralized
machine-learning approach, namely ML-ADMM, where each agent uses deep learning
to learn the consensus parameters on the coupling branches. The paper also
explores the idea of learning only from ADMM runs that exhibit high-quality
convergence properties, and proposes filtering mechanisms to select these runs.
Experimental results on test cases based on the French system demonstrate the
potential of the approach in speeding up the convergence of ADMM significantly.
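To make the warm-starting idea concrete, below is a minimal, self-contained Python sketch of the kind of initialization ML-ADMM performs: consensus ADMM on a toy two-region problem with scalar quadratic objectives, where the consensus value and scaled duals on the single coupling "branch" come from a hypothetical learned predictor instead of zeros. The objectives, the predictor, and the stopping rule are illustrative placeholders, not the paper's AC-OPF formulation or code.

```python
# Toy two-region consensus ADMM with a learned warm start for the coupling
# (consensus) variables. Hand-made illustration only; not the authors' code.
import numpy as np


def local_update(a, z, u, rho):
    """x-update for f_i(x) = 0.5*(x - a)**2: argmin_x f_i(x) + (rho/2)*(x - z + u)**2."""
    return (a + rho * (z - u)) / (1.0 + rho)


def consensus_admm(a1, a2, rho=1.0, z=0.0, u1=0.0, u2=0.0, tol=1e-6, max_iter=1000):
    """Two-region consensus ADMM in scaled-dual form; returns (z, iterations used)."""
    for k in range(1, max_iter + 1):
        x1 = local_update(a1, z, u1, rho)          # region 1 solves its subproblem
        x2 = local_update(a2, z, u2, rho)          # region 2 solves its subproblem
        z_new = 0.5 * ((x1 + u1) + (x2 + u2))      # consensus update on the coupling variable
        u1 += x1 - z_new                           # scaled dual updates
        u2 += x2 - z_new
        primal = max(abs(x1 - z_new), abs(x2 - z_new))
        dual = rho * abs(z_new - z)
        z = z_new
        if primal < tol and dual < tol:
            return z, k
    return z, max_iter


def learned_warm_start(a1, a2, rng):
    """Stand-in for a deep network trained on boundary values from past ADMM runs:
    returns a noisy guess of the optimal consensus value and scaled duals."""
    z_star = 0.5 * (a1 + a2)                       # known optimum of this toy problem
    eps = rng.normal(scale=0.05, size=3)
    return z_star + eps[0], (a1 - z_star) + eps[1], (a2 - z_star) + eps[2]


rng = np.random.default_rng(0)
a1, a2 = 3.0, -1.0
_, cold_iters = consensus_admm(a1, a2)                          # cold start (zeros)
z0, u10, u20 = learned_warm_start(a1, a2, rng)
_, warm_iters = consensus_admm(a1, a2, z=z0, u1=u10, u2=u20)    # learned warm start
print(f"iterations to tolerance: cold={cold_iters}, warm={warm_iters}")
```

In the paper's setting, each regional agent would solve its own AC-OPF subproblem in place of `local_update`, and the predictor would be a deep network trained on consensus parameters collected from previous ADMM runs that pass the proposed convergence-quality filters.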
Related papers
- Accelerating Multi-Block Constrained Optimization Through Learning to Optimize [9.221883233960234]
Multi-block ADMM-type methods offer substantial reductions in per-iteration complexity.
MPALM shares a similar form with multi-block ADMM and ensures convergence.
However, MPALM's performance is highly sensitive to the choice of penalty parameters.
We propose a novel L2O approach that adaptively selects this hyperparameter using supervised learning.
arXiv Detail & Related papers (2024-09-25T19:58:29Z)
- AA-DLADMM: An Accelerated ADMM-based Framework for Training Deep Neural Networks [1.3812010983144802]
Stochastic gradient descent (SGD) and its many variants are the most widespread optimization algorithms for training deep neural networks.
SGD suffers from inevitable drawbacks, including vanishing gradients, lack of theoretical guarantees, and substantial sensitivity to input.
This paper proposes an Anderson Acceleration for Deep Learning ADMM (AA-DLADMM) algorithm to tackle this drawback.
arXiv Detail & Related papers (2024-01-08T01:22:00Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- A Reinforcement Learning Approach to Parameter Selection for Distributed Optimization in Power Systems [1.1199585259018459]
We develop an adaptive penalty parameter selection policy for the AC optimal power flow (ACOPF) problem solved via ADMM.
We show that our RL policy demonstrates promise for generalizability, performing well under unseen loading schemes as well as under unseen losses of lines and generators.
This work thus provides a proof-of-concept for using RL for parameter selection in ADMM for power systems applications; a sketch of the fixed residual-balancing rule that such learned policies typically replace appears after this list.
arXiv Detail & Related papers (2021-10-22T18:17:32Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and adapt well to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- Communication Efficient Distributed Learning with Censored, Quantized, and Generalized Group ADMM [52.12831959365598]
We propose a communication-efficient decentralized machine learning framework that solves a consensus optimization problem defined over a network of inter-connected workers.
The proposed algorithm, Censored and Quantized Generalized GADMM (CQ-GGADMM), leverages the worker grouping and decentralized learning ideas of the Group Alternating Direction Method of Multipliers (GADMM).
Numerical simulations corroborate that CQ-GGADMM exhibits higher communication efficiency in terms of the number of communication rounds and transmit energy consumption without compromising the accuracy and convergence speed.
arXiv Detail & Related papers (2020-09-14T14:18:19Z)
- Learning Centric Power Allocation for Edge Intelligence [84.16832516799289]
Edge intelligence, which collects distributed data and performs machine learning at the edge, has been proposed.
This paper proposes a learning centric power allocation (LCPA) method, which allocates radio resources based on an empirical classification error model.
Experimental results show that the proposed LCPA algorithm significantly outperforms other power allocation algorithms.
arXiv Detail & Related papers (2020-07-21T07:02:07Z)
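Two of the entries above (the learning-to-optimize penalty-selection paper and the RL parameter-selection paper) revolve around choosing ADMM's penalty parameter. For context, here is a sketch of the classical residual-balancing heuristic that such learned policies typically replace or generalize; this is textbook background (Boyd et al., 2011, Sec. 3.4.1), not code from either paper.

```python
# Classical residual-balancing rule for ADMM's penalty parameter rho.
# The constants mu and tau below are the usual textbook defaults.
def update_rho(rho, primal_res, dual_res, mu=10.0, tau=2.0):
    """Increase rho when the primal residual dominates, decrease it when the
    dual residual dominates, and leave it unchanged otherwise."""
    if primal_res > mu * dual_res:
        return rho * tau      # enforce feasibility more aggressively
    if dual_res > mu * primal_res:
        return rho / tau      # ease off so the dual variables can catch up
    return rho
# Note: when rho changes in the scaled-dual form, the scaled duals must be
# rescaled by old_rho / new_rho to keep the iteration consistent.
```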
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.