Global and Preference-based Optimization with Mixed Variables using Piecewise Affine Surrogates
- URL: http://arxiv.org/abs/2302.04686v4
- Date: Wed, 11 Dec 2024 12:06:10 GMT
- Title: Global and Preference-based Optimization with Mixed Variables using Piecewise Affine Surrogates
- Authors: Mengjia Zhu, Alberto Bemporad
- Abstract summary: This paper proposes a novel surrogate-based global optimization algorithm to solve linearly constrained mixed-variable problems.
The proposed approach is based on constructing a piecewise affine surrogate of the objective function over feasible samples.
The two algorithms are evaluated on several unconstrained and constrained mixed-variable benchmark problems.
- Score: 0.6083861980670925
- Abstract: Optimization problems involving mixed variables (i.e., variables of numerical and categorical nature) can be challenging to solve, especially in the presence of mixed-variable constraints. Moreover, when the objective function is the result of a complicated simulation or experiment, it may be expensive-to-evaluate. This paper proposes a novel surrogate-based global optimization algorithm to solve linearly constrained mixed-variable problems up to medium size (around 100 variables after encoding). The proposed approach is based on constructing a piecewise affine surrogate of the objective function over feasible samples. We assume the objective function is black-box and expensive-to-evaluate, while the linear constraints are quantifiable, unrelaxable, a priori known, and are cheap to evaluate. We introduce two types of exploration functions to efficiently search the feasible domain via mixed-integer linear programming solvers. We also provide a preference-based version of the algorithm designed for situations where only pairwise comparisons between samples can be acquired, while the underlying objective function to minimize remains unquantified. The two algorithms are evaluated on several unconstrained and constrained mixed-variable benchmark problems. The results show that, within a small number of required experiments/simulations, the proposed algorithms can often achieve better or comparable results than other existing methods.
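To make the acquisition step concrete, below is a minimal sketch under simplifying assumptions: the surrogate is taken as a convex max-of-affine model (the paper's general piecewise affine surrogate and its exploration terms are richer), the coefficients A_pieces/b_pieces are toy values standing in for a fitted model, and the MILP is solved with scipy.optimize.milp rather than the authors' implementation. A one-hot pair (c1, c2) plays the role of an encoded categorical variable.
```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy surrogate: 3 affine pieces over x = [x1, x2, c1, c2], where
# (c1, c2) one-hot encodes a 2-level categorical variable.
A_pieces = np.array([[ 1.0,  2.0, 0.5, -0.5],
                     [-1.0,  1.0, 1.0,  0.0],
                     [ 0.5, -2.0, 0.0,  1.0]])
b_pieces = np.array([0.0, 1.0, -0.5])

n = 4                          # decision variables, plus epigraph var t
c = np.zeros(n + 1)
c[-1] = 1.0                    # objective: minimize t

# Epigraph: t >= a_k @ x + b_k  <=>  a_k @ x - t <= -b_k
epi = LinearConstraint(np.hstack([A_pieces, -np.ones((3, 1))]),
                       -np.inf, -b_pieces)
# Known, cheap linear constraint on the numeric part: x1 + x2 <= 1.5
lin = LinearConstraint(np.array([[1.0, 1.0, 0.0, 0.0, 0.0]]), -np.inf, 1.5)
# One-hot consistency: c1 + c2 == 1
onehot = LinearConstraint(np.array([[0.0, 0.0, 1.0, 1.0, 0.0]]), 1.0, 1.0)

integrality = np.array([0, 0, 1, 1, 0])   # binaries for the encoding
bounds = Bounds([0, 0, 0, 0, -np.inf], [1, 1, 1, 1, np.inf])

res = milp(c, constraints=[epi, lin, onehot],
           integrality=integrality, bounds=bounds)
print(res.x, res.fun)          # next candidate sample to evaluate
```
The binary integrality on the one-hot encoding is what turns an otherwise plain LP epigraph into the mixed-integer program the abstract refers to.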
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
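As a concrete illustration of the gradient-free idea (my sketch, not the paper's method, which additionally exploits high-order smoothness and handles noise), a two-point zeroth-order estimator replaces the gradient with a finite difference along a random direction:
```python
import numpy as np

def zo_gradient(f, x, h=1e-3, rng=np.random.default_rng(0)):
    """Two-point zeroth-order estimate of grad f(x)."""
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)                    # random unit direction
    # d * (f(x+he) - f(x-he)) / (2h) * e; unbiased for quadratics
    return x.size * (f(x + h * e) - f(x - h * e)) / (2 * h) * e

# Zeroth-order gradient descent on a toy smooth convex problem
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(3)
for _ in range(500):
    x -= 0.05 * zo_gradient(f, x)
print(x)                                      # approaches [1, 1, 1]
```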
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- Experience in Engineering Complex Systems: Active Preference Learning with Multiple Outcomes and Certainty Levels [1.5257326975704795]
Black-box optimization refers to problems whose objective function and/or constraint sets are unknown, inaccessible, or non-existent.
The so-called Active Preference Learning algorithm has been developed to exploit this specific information.
Our approach extends the algorithm so that it can exploit further information effectively.
arXiv Detail & Related papers (2023-02-27T15:55:37Z)
- Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z)
- Quantum Goemans-Williamson Algorithm with the Hadamard Test and Approximate Amplitude Constraints [62.72309460291971]
We introduce a variational quantum algorithm for the Goemans-Williamson algorithm that uses only $n+1$ qubits.
Efficient optimization is achieved by encoding the objective matrix as a properly parameterized unitary conditioned on an auxiliary qubit.
We demonstrate the effectiveness of our protocol by devising an efficient quantum implementation of the Goemans-Williamson algorithm for various NP-hard problems.
arXiv Detail & Related papers (2022-06-30T03:15:23Z)
- A framework for bilevel optimization that enables stochastic and global variance reduction algorithms [21.67411847762289]
Bilevel optimization is the problem of minimizing a value function that involves the arg-minimum of another function.
We introduce a novel framework, in which the solution of the inner problem, the solution of the linear system, and the main variable evolve at the same time.
We demonstrate that SABA, an adaptation of the celebrated SAGA algorithm in our framework, has an $O(1/T)$ convergence rate.
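For intuition, here is a minimal sketch of the single-loop idea on a toy quadratic bilevel problem; the variable names and step sizes are mine, and SABA itself adds SAGA-style variance reduction on top of these simultaneous updates:
```python
import numpy as np

# Toy bilevel problem: min_x F(x, z*(x)),  z*(x) = argmin_z G(x, z),
# with G(x, z) = 0.5*||z - x||^2 (so z*(x) = x) and F(x, z) = 0.5*||z - 1||^2.
def grad_G_z(x, z): return z - x      # inner gradient
def grad_F_z(x, z): return z - 1.0    # outer gradient w.r.t. z

x, z, v = np.zeros(2), np.zeros(2), np.zeros(2)
lr = 0.1
for _ in range(1000):
    z -= lr * grad_G_z(x, z)          # one inner-problem step
    # v tracks the solution of (d2G/dz2) v = dF/dz; here d2G/dz2 = I
    v -= lr * (v - grad_F_z(x, z))
    # hypergradient = grad_x F - (d2G/dxdz) v; here grad_x F = 0 and
    # d2G/dxdz = -I, so the hypergradient is simply v
    x -= lr * v
print(x, z)                           # both approach [1, 1]
```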
arXiv Detail & Related papers (2022-01-31T18:17:25Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- A Granular Sieving Algorithm for Deterministic Global Optimization [6.01919376499018]
A gradient-free deterministic method is developed to solve global optimization problems for Lipschitz continuous functions.
The method can be regarded as granular sieving with synchronous analysis in both the domain and range of the objective function.
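A minimal sketch of the sieving idea for a one-dimensional L-Lipschitz function (my simplification, not the paper's algorithm): subintervals whose Lipschitz lower bound cannot beat the incumbent are discarded, and the rest are refined.
```python
import numpy as np

def sieve_minimize(f, lo, hi, L, tol=1e-4):
    """Global minimum of an L-Lipschitz f on [lo, hi] by interval sieving."""
    boxes = [(lo, hi)]
    best_x, best_f = lo, f(lo)
    while boxes:
        a, b = boxes.pop()
        m = 0.5 * (a + b)
        fm = f(m)
        if fm < best_f:
            best_x, best_f = m, fm
        # f >= fm - L*(b - a)/2 on [a, b]; refine only if it may beat best
        if fm - L * (b - a) / 2 < best_f and (b - a) > tol:
            boxes += [(a, m), (m, b)]
    return best_x, best_f

f = lambda x: np.sin(3 * x) + 0.5 * x          # Lipschitz with L <= 3.5
print(sieve_minimize(f, -3.0, 3.0, L=3.5))
```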
arXiv Detail & Related papers (2021-07-14T10:03:03Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Slowly Varying Regression under Sparsity [5.22980614912553]
We present the framework of slowly varying regression under sparsity, allowing regression models to exhibit slow and sparse variations.
We suggest a procedure that reformulates the problem as a binary convex optimization problem.
We show that the resulting model outperforms competing formulations in comparable times across various datasets.
arXiv Detail & Related papers (2021-02-22T04:51:44Z)
- Aligning Partially Overlapping Point Sets: an Inner Approximation Algorithm [80.15123031136564]
We propose a robust method to align point sets.
Because our algorithm needs no regularization on the transformation, it can handle situations where there is no prior information about the transformation's values.
Experimental results demonstrate the better robustness of the proposed method over state-of-the-art algorithms.
arXiv Detail & Related papers (2020-07-05T15:23:33Z)
- Hybrid Variance-Reduced SGD Algorithms For Nonconvex-Concave Minimax Problems [26.24895953952318]
We develop an algorithm to solve a class of nonconvex-concave minimax problems.
The algorithm can also work with either single or double mini-batch derivatives.
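For illustration, here is the hybrid variance-reduced estimator idea on a plain finite-sum minimization (my sketch; the paper applies estimators of this family to nonconvex-concave minimax problems, and the constants here are arbitrary):
```python
import numpy as np

# Finite-sum toy problem: f(x) = (1/N) * sum_i 0.5*(x - a[i])^2
N = 50
a = np.linspace(-1.0, 1.0, N)
grad = lambda x, i: x - a[i]          # component gradient

def hybrid_estimator(x, x_prev, v_prev, beta, rng):
    """v_t = beta*g_i(x_t) + (1-beta)*(v_{t-1} + g_i(x_t) - g_i(x_{t-1}))."""
    i = rng.integers(0, N)            # one sampled component
    g_new, g_old = grad(x, i), grad(x_prev, i)
    return beta * g_new + (1.0 - beta) * (v_prev + g_new - g_old)

rng = np.random.default_rng(0)
x = x_prev = 5.0
v = grad(x, rng.integers(0, N))
for _ in range(2000):
    x_prev, x = x, x - 0.1 * v
    v = hybrid_estimator(x, x_prev, v, beta=0.1, rng=rng)
print(x)                              # settles near mean(a) = 0
```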
arXiv Detail & Related papers (2020-06-27T03:05:18Z)