Cutting Plane Selection with Analytic Centers and Multiregression
- URL: http://arxiv.org/abs/2212.07231v2
- Date: Thu, 15 Dec 2022 21:58:01 GMT
- Title: Cutting Plane Selection with Analytic Centers and Multiregression
- Authors: Mark Turner, Timo Berthold, Mathieu Besançon, Thorsten Koch
- Abstract summary: We propose new distance-based measures to qualify the value of a cut by quantifying the extent to which it separates relevant parts of the relaxed feasible set.
We assess the impact of the choice of distance measure on root node performance and throughout the whole branch-and-bound tree.
Our results indicate that analytic center-based methods help to significantly reduce the number of branch-and-bound nodes needed to explore the search space.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cutting planes are a crucial component of state-of-the-art mixed-integer
programming solvers, with the choice of which subset of cuts to add being vital
for solver performance. We propose new distance-based measures to qualify the
value of a cut by quantifying the extent to which it separates relevant parts
of the relaxed feasible set. For this purpose, we use the analytic centers of
the relaxation polytope or of its optimal face, as well as alternative optimal
solutions of the linear programming relaxation. We assess the impact of the
choice of distance measure on root node performance and throughout the whole
branch-and-bound tree, comparing our measures against those prevalent in the
literature. Finally, by a multi-output regression, we predict the relative
performance of each measure, using static features readily available before the
separation process. Our results indicate that analytic center-based methods
help to significantly reduce the number of branch-and-bound nodes needed to
explore the search space and that our multiregression approach can further
improve on any individual method.
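The core idea of a distance-based cut measure can be sketched briefly. The snippet below is a minimal illustration, not the authors' implementation: it scores a candidate cut a^T x <= b by the Euclidean distance from a reference point (for example an analytic center or an alternative LP-optimal solution, assumed to be computed elsewhere) to the cut's hyperplane, so that deeper-separating cuts score higher. All names and the example data are illustrative.

```python
import numpy as np

def cut_distance(a, b, x_ref):
    """Signed Euclidean distance from a reference point x_ref to the
    hyperplane a^T x = b of a cut a^T x <= b. A positive value means
    x_ref violates the cut; larger values indicate deeper separation."""
    a = np.asarray(a, dtype=float)
    return (a @ x_ref - b) / np.linalg.norm(a)

# Two candidate cuts and a reference point (e.g., an analytic center).
cuts = [(np.array([1.0, 1.0]), 1.0),   # x1 + x2 <= 1
        (np.array([2.0, 0.0]), 1.0)]   # 2*x1 <= 1
x_star = np.array([0.75, 0.75])

scores = [cut_distance(a, b, x_star) for a, b in cuts]
best = int(np.argmax(scores))          # index of the deepest cut
```

Different choices of `x_ref` (analytic center of the relaxation polytope, of its optimal face, or alternative LP optima) yield the different measures the paper compares.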
Related papers
- Predictive Coresets [0.0]
Conventional coreset approaches determine weights by minimizing the Kullback-Leibler divergence between the likelihood functions of the full and weighted datasets.
We propose an alternative variational method which employs randomized posteriors and finds weights to match the unknown posterior predictive distributions conditioned on the full and reduced datasets.
We evaluate the performance of the proposed coreset construction on diverse problems, including random partitions and density estimation.
arXiv Detail & Related papers (2025-02-08T23:57:43Z) - Learning to Remove Cuts in Integer Linear Programming [57.15699051569638]
We consider the removal of previous cuts introduced at any of the preceding iterations of a method under a learnable parametric criterion.
We demonstrate that in fundamental optimization settings such cut removal policies can lead to significant improvements over both human-based and machine learning-guided cut addition policies.
arXiv Detail & Related papers (2024-06-26T22:50:43Z) - Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z) - Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - Approximation of a Pareto Set Segment Using a Linear Model with Sharing Variables [14.161627541155775]
We develop a performance metric that considers both optimality and variable sharing.
We then design an algorithm for finding the model that minimizes the metric to meet the user's requirements.
Experimental results illustrate that we can obtain a linear model that approximates well the mapping from preference vectors to solutions in a local area.
arXiv Detail & Related papers (2024-03-30T05:42:07Z) - Exploiting Temporal Structures of Cyclostationary Signals for Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly well suited to a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
arXiv Detail & Related papers (2022-08-22T14:04:56Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationary points yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - Piecewise linear regression and classification [0.20305676256390928]
This paper proposes a method for solving multivariate regression and classification problems using piecewise linear predictors.
A Python implementation of the algorithm described in this paper is available at http://cse.lab.imtlucca.it/bemporad/parc.
arXiv Detail & Related papers (2021-03-10T17:07:57Z) - Stochastic Optimization Forests [60.523606291705214]
We show how to train forest decision policies by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm.
We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
arXiv Detail & Related papers (2020-08-17T16:56:06Z) - How to choose the most appropriate centrality measure? A decision tree approach [0.0]
We introduce the culling method, which relies on the expert concept of centrality behavior on simple graphs.
We apply this approach to a diverse set of 40 centralities, including novel kernel-based indices.
Applying the culling method provides insightful findings on some centrality indices.
arXiv Detail & Related papers (2020-03-02T17:38:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.