Global Optimization: A Machine Learning Approach
- URL: http://arxiv.org/abs/2311.01742v1
- Date: Fri, 3 Nov 2023 06:33:38 GMT
- Title: Global Optimization: A Machine Learning Approach
- Authors: Dimitris Bertsimas, Georgios Margaritis
- Abstract summary: Bertsimas and Ozturk (2023) proposed OCTHaGOn as a way of solving black-box global optimization problems.
We provide extensions to this approach by approximating the original problem using other MIO-representable ML models.
We show improvements in solution feasibility and optimality in the majority of instances.
- Score: 7.052596485478637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many approaches for addressing Global Optimization problems typically rely on
relaxations of nonlinear constraints over specific mathematical primitives.
This is restrictive in applications with constraints that are black-box,
implicit or consist of more general primitives. Trying to address such
limitations, Bertsimas and Ozturk (2023) proposed OCTHaGOn as a way of solving
black-box global optimization problems by approximating the nonlinear
constraints using hyperplane-based Decision-Trees and then using those trees to
construct a unified mixed integer optimization (MIO) approximation of the
original problem. We provide extensions to this approach, by (i) approximating
the original problem using other MIO-representable ML models besides Decision
Trees, such as Gradient Boosted Trees, Multi-Layer Perceptrons and Support
Vector Machines, (ii) proposing adaptive sampling procedures for more accurate
machine learning-based constraint approximations, (iii) utilizing robust
optimization to account for the uncertainty of the sample-dependent training of
the ML models, and (iv) leveraging a family of relaxations to address the
infeasibilities of the final MIO approximation. We then test the enhanced
framework in 81 Global Optimization instances. We show improvements in solution
feasibility and optimality in the majority of instances. We also compare
against BARON, showing improved optimality gaps or solution times in 11
instances.
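The adaptive-sampling idea in (ii) can be illustrated with a minimal sketch: starting from uniform samples of a black-box constraint g(x) <= 0, repeatedly keep the points whose constraint values lie closest to the boundary g(x) = 0 and perturb them to densify the sample there. The constraint g, the box bounds, and all parameters below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(X):
    # hypothetical black-box constraint: feasible where g(x) <= 0 (unit disk)
    return X[:, 0] ** 2 + X[:, 1] ** 2 - 1.0

def adaptive_sample(g, n_init=200, n_rounds=3, n_new=100, radius=0.1):
    # start from uniform samples over the box [-2, 2]^2
    X = rng.uniform(-2.0, 2.0, size=(n_init, 2))
    for _ in range(n_rounds):
        vals = g(X)
        # pick the points whose constraint values are closest to the boundary g = 0
        near = X[np.argsort(np.abs(vals))[:n_new]]
        # perturb them slightly to densify the sample near the boundary
        X = np.vstack([X, near + rng.normal(scale=radius, size=near.shape)])
    return X

X = adaptive_sample(g)  # 200 initial + 3 rounds * 100 new = 500 points
```

A constraint-classifying ML model (e.g. a decision tree) trained on such a sample sees far more data near the feasibility boundary, which is where the MIO approximation needs accuracy.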
Related papers
- End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
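For context, an Ordered Weighted Averaging objective applies a fixed weight vector to the components of an outcome vector after sorting them, which is what makes it nondifferentiable at ties; a minimal sketch of the standard definition (names are illustrative):

```python
import numpy as np

def owa(x, w):
    # Ordered Weighted Averaging: weights are applied to the entries of x
    # sorted in descending order, not to fixed coordinates
    x_sorted = np.sort(np.asarray(x, dtype=float))[::-1]
    return float(np.dot(w, x_sorted))

# w = (1, 0, ..., 0) recovers the max; uniform weights recover the mean
```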
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Symmetric Tensor Networks for Generative Modeling and Constrained
Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- Polynomial Optimization: Enhancing RLT relaxations with Conic
Constraints [0.0]
Conic optimization has emerged as a powerful tool for designing tractable and guaranteed algorithms for non-convex problems.
We investigate the strengthening of the RLT relaxations of optimization problems through the addition of nine different types of constraints.
We show how to design these variants and compare their performance with respect to each other and to the standard RLT relaxations.
arXiv Detail & Related papers (2022-08-11T02:13:04Z)
- Tree ensemble kernels for Bayesian optimization with known constraints
over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Predict and Optimize: Through the Lens of Learning to Rank [9.434400627011108]
We show that noise contrastive estimation can be considered a case of learning to rank the solution cache.
We also develop pairwise and listwise ranking loss functions, which can be differentiated in closed form without the need to solve the optimization problem.
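As a generic illustration of the pairwise flavor (not this paper's exact loss), a margin-based pairwise ranking loss penalizes every pair of cached solutions whose predicted scores disagree with their true ordering; all names and the margin value below are assumptions.

```python
def pairwise_ranking_loss(scores, labels, margin=1.0):
    # hypothetical pairwise hinge loss: for each pair where labels[i] > labels[j],
    # penalize score orderings that fail to separate i from j by `margin`
    loss, pairs = 0.0, 0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if labels[i] > labels[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)
```

Because the hinge is piecewise linear in the scores, such losses admit closed-form (sub)gradients without re-solving the underlying optimization problem.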
arXiv Detail & Related papers (2021-12-07T10:11:44Z)
- A Surrogate Objective Framework for Prediction+Optimization with Soft
Constraints [29.962390392493507]
Decision-focused prediction approaches, such as SPO+ and direct optimization, have been proposed to fill this gap.
This paper proposes a novel analytically differentiable surrogate objective framework for real-world linear and semi-definite negative quadratic programming problems.
arXiv Detail & Related papers (2021-11-22T17:09:57Z)
- Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide-and-conquer algorithm to tackle optimization problems without this restriction and predict its coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z)
- Mixed-Projection Conic Optimization: A New Paradigm for Modeling Rank
Constraints [3.179831861897336]
We provide a framework for solving low-rank optimization problems to certifiable optimality.
Our framework also provides near-optimal solutions through rounding and local search techniques.
arXiv Detail & Related papers (2020-09-22T08:59:06Z)
- Extracting Optimal Solution Manifolds using Constrained Neural
Optimization [6.800113407368289]
Constrained optimization solution algorithms are restricted to point-based solutions.
We present an approach for extracting optimal sets as approximate manifolds, using unmodified, non-informed neural networks.
arXiv Detail & Related papers (2020-09-13T15:37:44Z)
- Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation for Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.