Solve Large-scale Unit Commitment Problems by Physics-informed Graph
Learning
- URL: http://arxiv.org/abs/2311.15216v1
- Date: Sun, 26 Nov 2023 07:17:45 GMT
- Title: Solve Large-scale Unit Commitment Problems by Physics-informed Graph
Learning
- Authors: Jingtao Qin, Nanpeng Yu
- Abstract summary: Unit commitment (UC) problems are typically formulated as mixed-integer programs (MIP) and solved by the branch-and-bound (B&B) scheme.
Recent advances in graph neural networks (GNNs) make it possible to enhance the B&B algorithm in modern MIP solvers by learning to dive and branch.
- Score: 1.1748284119769041
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unit commitment (UC) problems are typically formulated as mixed-integer
programs (MIP) and solved by the branch-and-bound (B&B) scheme. The recent
advances in graph neural networks (GNNs) make it possible to enhance the B&B
algorithm in modern MIP solvers by learning to dive and branch. Existing GNN
models that tackle MIP problems are mostly constructed from the mathematical
formulation, which is computationally expensive for large-scale UC problems. In this
paper, we propose a physics-informed hierarchical graph convolutional network
(PI-GCN) for neural diving that leverages the underlying features of various
components of power systems to find high-quality variable assignments.
Furthermore, we adopt the MIP model-based graph convolutional network (MB-GCN)
for neural branching to select the optimal variables for branching at each node
of the B&B tree. Finally, we integrate neural diving and neural branching into
a modern MIP solver to establish a novel neural MIP solver designed for
large-scale UC problems. Numerical studies show that PI-GCN achieves better
performance and scalability than the baseline MB-GCN on neural diving.
Moreover, when combined with our proposed neural diving model and the baseline
neural branching model, the neural MIP solver yields the lowest operational
cost, outperforming a modern MIP solver on all testing days.
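
To make the neural-diving idea concrete, the following is a toy, self-contained sketch rather than the paper's implementation: a hand-written stub stands in for the PI-GCN predictor, commitment binaries with high-confidence predictions are fixed, and only the remaining sub-problem is searched. All names, the single-period setup, and the 0.9/0.1 thresholds are illustrative assumptions.

```python
import itertools

# Toy one-period unit-commitment instance: three generators, one demand.
COSTS = [10.0, 14.0, 30.0]   # $/MWh marginal cost of each generator
CAPS = [50.0, 40.0, 20.0]    # MW capacity of each generator
DEMAND = 70.0                # MW system demand

def predict_commitment_probs(costs, caps):
    """Stub standing in for the learned PI-GCN: score units by size/cost.

    The real model scores units from power-system features; this heuristic
    only mimics the shape of its output (one probability per unit).
    """
    scores = [cap / cost for cap, cost in zip(caps, costs)]
    top = max(scores)
    return [s / top for s in scores]

def dispatch_cost(on, costs, caps, demand):
    """Economic dispatch over committed units: fill cheapest units first."""
    units = sorted((c, p) for c, p, u in zip(costs, caps, on) if u)
    remaining, total = demand, 0.0
    for cost, cap in units:
        output = min(cap, remaining)
        total += cost * output
        remaining -= output
    return total if remaining <= 1e-9 else float("inf")  # inf = infeasible

# Neural diving: fix high-confidence binaries, search only the rest.
probs = predict_commitment_probs(COSTS, CAPS)
fixed = {i: 1 for i, p in enumerate(probs) if p >= 0.9}
fixed.update({i: 0 for i, p in enumerate(probs) if p <= 0.1})
free = [i for i in range(len(COSTS)) if i not in fixed]

best_cost, best_on = float("inf"), None
for bits in itertools.product([0, 1], repeat=len(free)):
    on = [fixed.get(i, 0) for i in range(len(COSTS))]
    for i, b in zip(free, bits):
        on[i] = b
    cost = dispatch_cost(on, COSTS, CAPS, DEMAND)
    if cost < best_cost:
        best_cost, best_on = cost, on

print("commitment:", best_on, "cost:", best_cost)
```

In the actual pipeline the unfixed sub-problem would be handed back to a MIP solver rather than enumerated; the enumeration here only keeps the sketch dependency-free.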
Related papers
- Deep learning enhanced mixed integer optimization: Learning to reduce model dimensionality [0.0]
This work introduces a framework to address the computational complexity inherent in Mixed-Integer Programming.
By employing deep learning, we construct problem-specific models that identify and exploit common structures across MIP instances.
We present an algorithm for generating synthetic data enhancing the robustness and generalizability of our models.
arXiv Detail & Related papers (2024-01-17T19:15:13Z) - NN-Steiner: A Mixed Neural-algorithmic Approach for the Rectilinear
Steiner Minimum Tree Problem [5.107107601277712]
We focus on the rectilinear Steiner minimum tree (RSMT) problem, which is of critical importance in IC layout design.
We propose NN-Steiner, which is a novel mixed neural-algorithmic framework for computing RSMTs.
In particular, NN-Steiner only needs four neural network (NN) components that are called repeatedly within an algorithmic framework.
arXiv Detail & Related papers (2023-12-17T02:42:11Z) - Mixed-Integer Optimisation of Graph Neural Networks for Computer-Aided
Molecular Design [4.593587844188084]
ReLU neural networks have been modelled as constraints in mixed-integer linear programming (MILP).
We propose a formulation for ReLU Graph Convolutional Neural Networks and a MILP formulation for ReLU GraphSAGE models.
These formulations enable solving optimisation problems with trained GNNs embedded to global optimality.
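For context, the textbook big-M encoding of a single ReLU activation $y = \max(0, a)$, where the pre-activation $a = w^\top x + b$ has known bounds $L \le a \le U$, is sketched below; this is the generic construction such formulations build on, not the paper's specific GraphSAGE encoding.

```latex
% Big-M MILP encoding of y = max(0, a) with bounds L <= a <= U and a
% binary indicator z selecting the active piece of the ReLU.
\begin{align}
  y &\ge a, \qquad y \ge 0, \\
  y &\le a - L(1 - z), \\
  y &\le U z, \\
  z &\in \{0, 1\}.
\end{align}
```

When $z = 1$ the constraints pin $y = a$ (feasible only if $a \ge 0$); when $z = 0$ they pin $y = 0$ (feasible only if $a \le 0$), so feasibility forces $z$ onto the correct branch.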
arXiv Detail & Related papers (2023-12-02T21:10:18Z) - MISNN: Multiple Imputation via Semi-parametric Neural Networks [9.594714330925703]
Multiple imputation (MI) has been widely applied to missing value problems in biomedical, social and econometric research.
We propose MISNN, a novel and efficient algorithm that incorporates feature selection for MI.
arXiv Detail & Related papers (2023-05-02T21:45:36Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
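As a point of reference, a minimal snnTorch usage sketch (generic PyTorch code, not the IPU-optimized release described above) might look like the following; the constants are illustrative.

```python
import torch
import snntorch as snn

# One leaky integrate-and-fire (LIF) neuron; beta is the membrane decay
# rate per timestep (0.9 is an arbitrary illustrative choice).
lif = snn.Leaky(beta=0.9)
mem = lif.init_leaky()  # zero-initialized membrane potential

# Drive the neuron with 25 timesteps of random input current and count spikes.
spikes = []
for _ in range(25):
    cur_in = torch.rand(1)       # toy input current for this timestep
    spk, mem = lif(cur_in, mem)  # returns (spike output, updated membrane)
    spikes.append(spk)

print(int(torch.stack(spikes).sum().item()), "spikes emitted")
```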
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z) - Graph Neural Networks for Scalable Radio Resource Management:
Architecture Design and Theoretical Analysis [31.372548374969387]
We propose to apply graph neural networks (GNNs) to solve large-scale radio resource management problems.
The proposed method is highly scalable and can solve the beamforming problem in an interference channel with $1000$ transceiver pairs within $6$ milliseconds on a single GPU.
arXiv Detail & Related papers (2020-07-15T11:43:32Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for deep neural networks.
Our algorithm requires far fewer communication rounds than naive parallelization while retaining theoretical convergence guarantees.
Experiments on several datasets confirm the theory and demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.