Solving a Class of Cut-Generating Linear Programs via Machine Learning
- URL: http://arxiv.org/abs/2310.19920v1
- Date: Mon, 30 Oct 2023 18:31:52 GMT
- Title: Solving a Class of Cut-Generating Linear Programs via Machine Learning
- Authors: Atefeh Rajabalizadeh and Danial Davarnia
- Abstract summary: Cut-generating linear programs (CGLPs) play a key role as a separation oracle to produce valid inequalities for the feasible region of mixed-integer programs.
Running the CGLPs at the nodes of the branch-and-bound tree is computationally cumbersome due to the large number of node candidates and the lack of a priori knowledge on which nodes admit useful cutting planes.
We propose a novel framework based on machine learning to approximate the optimal value of a CGLP class that determines whether a cutting plane can be generated at a node of the branch-and-bound tree.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cut-generating linear programs (CGLPs) play a key role as a separation oracle
to produce valid inequalities for the feasible region of mixed-integer
programs. When incorporated inside branch-and-bound, the cutting planes
obtained from CGLPs help to tighten relaxations and improve dual bounds.
However, running the CGLPs at the nodes of the branch-and-bound tree is
computationally cumbersome due to the large number of node candidates and the
lack of a priori knowledge on which nodes admit useful cutting planes. As a
result, CGLPs are often avoided at default settings of branch-and-cut
algorithms despite their potential impact on improving dual bounds. In this
paper, we propose a novel framework based on machine learning to approximate
the optimal value of a CGLP class that determines whether a cutting plane can
be generated at a node of the branch-and-bound tree. Translating the CGLP as an
indicator function of the objective function vector, we show that it can be
approximated through conventional data classification techniques. We provide a
systematic procedure to efficiently generate training data sets for the
corresponding classification problem based on the CGLP structure. We conduct
computational experiments on benchmark instances using classification methods
such as logistic regression. These results suggest that the approximate CGLP
obtained from classification can improve the solution time compared to that of
conventional cutting plane methods. Our proposed framework can be efficiently
applied to a large number of nodes in the branch-and-bound tree to identify the
best candidates for adding a cut.
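The abstract's core idea, viewing the CGLP's optimal value as a binary indicator of the objective-function vector and approximating that indicator with a classifier, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: in the paper, labels would come from actually solving CGLPs at branch-and-bound nodes, and all names here are hypothetical.

```python
# Hypothetical sketch: approximate a CGLP's cut/no-cut indicator with
# logistic regression, as the abstract describes. The data is synthetic;
# a real training set would be generated from the CGLP structure.
import numpy as np

rng = np.random.default_rng(0)

# Each row stands in for a CGLP objective-function vector at a candidate
# node; label 1 means "solving the CGLP here would yield a violated cut".
X = rng.normal(size=(1000, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # unknown ground-truth direction
y = (X @ w_true > 0).astype(int)

def train_logreg(X, y, lr=0.1, epochs=200):
    """Plain logistic regression via batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                               # gradient of log-loss wrt logits
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

w, b = train_logreg(X, y)
pred = ((X @ w + b) > 0).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Once trained, evaluating such a classifier at a node costs only a dot product, which is what makes screening a large number of branch-and-bound nodes cheap compared to solving each CGLP.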
Related papers
- Understanding Gradient Boosting Classifier: Training, Prediction, and the Role of $\gamma_j$ [2.44755919161855]
The Gradient Boosting Classifier (GBC) is a widely used machine learning algorithm for binary classification.
This document explains the training and prediction processes, focusing on the computation of terminal node values.
We provide a step-by-step example in the appendix to help readers understand.
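The terminal-node value $\gamma_j$ that the summary refers to is, in standard gradient boosting for binary classification with log-loss, the Newton step computed per leaf. A toy sketch of that computation (toy labels, not from the cited paper):

```python
# Illustration of the terminal-node value gamma_j in gradient boosting for
# binary classification (one Newton step per leaf, log-loss):
#   gamma_j = sum(y_i - p_i) / sum(p_i * (1 - p_i))  over samples in leaf j.
import numpy as np

y = np.array([1, 0, 1, 1])           # toy binary labels falling in one leaf
F = np.zeros_like(y, dtype=float)    # current model scores (log-odds), start at 0
p = 1.0 / (1.0 + np.exp(-F))         # current probabilities, all 0.5 here

gamma = np.sum(y - p) / np.sum(p * (1 - p))
print(f"gamma = {gamma}")  # (0.5 - 0.5 + 0.5 + 0.5) / (4 * 0.25) = 1.0
```

The leaf's value is then added (scaled by the learning rate) to the log-odds score of every sample routed to that leaf.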
arXiv Detail & Related papers (2024-10-08T02:11:35Z) - Optimal Mixed Integer Linear Optimization Trained Multivariate Classification Trees [0.0]
We propose two cut-based mixed integer linear optimization (MILO) formulations for designing optimal binary classification trees.
Our models leverage on-the-fly identification of minimal infeasible subsystems (MISs) from which we derive cutting planes that hold the form of packing constraints.
arXiv Detail & Related papers (2024-08-02T14:37:28Z) - Learning Cut Generating Functions for Integer Programming [1.1510009152620668]
The branch-and-cut algorithm is the method of choice to solve large scale integer programming problems.
Recent advances have employed a data-driven approach to select optimal cutting planes from a parameterized family.
We show that the selected CGF can outperform the GMI cuts for certain distributions.
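For context, the GMI (Gomory mixed-integer) cuts that the learned cut-generating functions are compared against have a closed-form coefficient rule. A hedged sketch for the integer-nonbasic-variable case only (the continuous-variable terms are omitted; the toy tableau row is invented for illustration): given a simplex row $x_B + \sum_j a_j x_j = b$ with fractional $b$, write $f_0 = b - \lfloor b \rfloor$ and $f_j = a_j - \lfloor a_j \rfloor$; the cut is $\sum_j g(a_j)\, x_j \ge f_0$.

```python
# Hedged sketch of the GMI cut coefficient for an integer nonbasic
# variable; continuous-variable terms of the full GMI cut are omitted.
import math

def gmi_coeff(a: float, f0: float) -> float:
    """GMI coefficient g(a): f if f <= f0, else f0*(1-f)/(1-f0)."""
    f = a - math.floor(a)  # fractional part in [0, 1)
    return f if f <= f0 else f0 * (1.0 - f) / (1.0 - f0)

# Toy tableau row: b = 2.5, nonbasic coefficients a = [1.3, -0.2, 2.7].
b = 2.5
f0 = b - math.floor(b)  # 0.5
coeffs = [gmi_coeff(a, f0) for a in [1.3, -0.2, 2.7]]
print(coeffs)  # cut: 0.3*x1 + 0.2*x2 + 0.3*x3 >= 0.5 (up to rounding)
```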
arXiv Detail & Related papers (2024-05-22T20:56:34Z) - MeanCut: A Greedy-Optimized Graph Clustering via Path-based Similarity
and Degree Descent Criterion [0.6906005491572401]
Spectral clustering is popular and attractive due to its remarkable performance, easy implementation, and strong adaptability.
We propose MeanCut as the objective function and greedily optimize it in degree descending order for a nondestructive graph partition.
The validity of our algorithm is demonstrated through tests on real-world benchmarks and an application to face recognition.
arXiv Detail & Related papers (2023-12-07T06:19:39Z) - One-step Bipartite Graph Cut: A Normalized Formulation and Its
Application to Scalable Subspace Clustering [56.81492360414741]
We show how to enforce a one-step normalized cut for bipartite graphs, especially with linear-time complexity.
In this paper, we first characterize a novel one-step bipartite graph cut criterion with normalized constraints, and theoretically prove its equivalence to a trace problem.
We extend this cut criterion to a scalable subspace clustering approach, where adaptive anchor learning, bipartite graph learning, and one-step normalized bipartite graph partitioning are simultaneously modeled.
arXiv Detail & Related papers (2023-05-12T11:27:20Z) - General Cutting Planes for Bound-Propagation-Based Neural Network
Verification [144.7290035694459]
We generalize the bound propagation procedure to allow the addition of arbitrary cutting plane constraints.
We find that MIP solvers can generate high-quality cutting planes for strengthening bound-propagation-based verifiers.
Our method is the first verifier that can completely solve the oval20 benchmark and verify twice as many instances on the oval21 benchmark.
arXiv Detail & Related papers (2022-08-11T10:31:28Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - COPS: Controlled Pruning Before Training Starts [68.8204255655161]
State-of-the-art deep neural network (DNN) pruning techniques, applied one-shot before training starts, evaluate sparse architectures with the help of a single criterion -- called pruning score.
In this work we do not concentrate on a single pruning criterion, but provide a framework for combining arbitrary GSSs to create more powerful pruning strategies.
arXiv Detail & Related papers (2021-07-27T08:48:01Z) - MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z) - Fine-Grained Visual Classification with Efficient End-to-end
Localization [49.9887676289364]
We present an efficient localization module that can be fused with a classification network in an end-to-end setup.
We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft.
arXiv Detail & Related papers (2020-05-11T14:07:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.