Selecting the Best Optimizing System
- URL: http://arxiv.org/abs/2201.03065v1
- Date: Sun, 9 Jan 2022 18:26:40 GMT
- Title: Selecting the Best Optimizing System
- Authors: Nian Si, Zeyu Zheng
- Abstract summary: We formulate selecting the best optimizing system (SBOS) problems and provide solutions for those problems.
An SBOS problem compares different systems based on their expected performances under their own optimally chosen decision to select the best.
We design easy-to-implement algorithms that adaptively choose a system and a decision at which to evaluate the noisy system performance.
- Score: 16.16839471915128
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We formulate selecting the best optimizing system (SBOS) problems and provide
solutions for those problems. In an SBOS problem, a finite number of systems
are contenders. Inside each system, a continuous decision variable affects the
system's expected performance. An SBOS problem compares different systems based
on their expected performances under their own optimally chosen decision to
select the best, without advance knowledge of expected performances of the
systems nor the optimizing decision inside each system. We design
easy-to-implement algorithms that adaptively choose a system and a
decision at which to evaluate the noisy system performance, sequentially
eliminate inferior systems, and eventually recommend a system as the best
after spending a user-specified budget. The proposed algorithms integrate the stochastic
gradient descent method and the sequential elimination method to simultaneously
exploit the structure inside each system and make comparisons across systems.
For the proposed algorithms, we prove exponential rates of convergence to zero
for the probability of false selection, as the budget grows to infinity. We
conduct three numerical examples that represent three practical cases of SBOS
problems. Our proposed algorithms demonstrate consistent and stronger
performances in terms of the probability of false selection over benchmark
algorithms under a range of problem settings and sampling budgets.
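The combination of stochastic gradient steps inside each system with sequential elimination across systems can be illustrated with a minimal sketch. This is not the paper's exact procedure: the finite-difference gradient estimator, the decaying step size, and the shrinking elimination tolerance are illustrative assumptions.

```python
import numpy as np

def sbos_select(systems, budget, n_rounds=10, step0=0.5, seed=0):
    """Illustrative SBOS-style sketch (not the paper's exact algorithm):
    inside each surviving system, take stochastic-gradient-ascent steps
    on its decision variable, then sequentially eliminate systems whose
    estimated optimal performance trails the current leader."""
    rng = np.random.default_rng(seed)
    active = list(range(len(systems)))
    x = [0.0] * len(systems)               # current decision per system
    per_round = budget // n_rounds
    means = {}
    for r in range(1, n_rounds + 1):
        means = {}
        for i in active:
            f, n = systems[i], per_round // len(active)
            samples = []
            for _ in range(n):
                h = 0.1                    # finite-difference half-width
                grad = (f(x[i] + h, rng) - f(x[i] - h, rng)) / (2 * h)
                x[i] += (step0 / r) * grad # ascent with decaying step size
                samples.append(f(x[i], rng))
            means[i] = float(np.mean(samples))
        leader = max(means.values())
        tol = 2.0 / np.sqrt(r)             # heuristic shrinking tolerance
        active = [i for i in active if means[i] >= leader - tol]
    return max(active, key=lambda i: means[i])

# Three toy systems: expected performance c_k - (x - m_k)^2 plus noise;
# system 0 has the best achievable optimum (1.0 at x = 1).
make = lambda c, m: (lambda x, rng: c - (x - m) ** 2 + rng.normal(0, 0.1))
systems = [make(1.0, 1.0), make(0.2, -1.0), make(0.5, 2.0)]
best = sbos_select(systems, budget=3000)
```

The key point the sketch captures is that evaluations serve two purposes at once: moving each decision variable toward its system's optimum, and comparing systems at their current (improving) decisions.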
Related papers
- Stop Relying on No-Choice and Do not Repeat the Moves: Optimal, Efficient and Practical Algorithms for Assortment Optimization [38.57171985309975]
We develop efficient algorithms for regret minimization in assortment selection with Plackett-Luce (PL) based user choices.
Our methods are practical, provably optimal, and devoid of the aforementioned limitations of the existing methods.
arXiv Detail & Related papers (2024-02-29T07:17:04Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Convergence Rate Analysis for Optimal Computing Budget Allocation Algorithms [1.713291434132985]
Ordinal optimization (OO) is a widely-studied technique for optimizing discrete-event dynamic systems.
A well-known method in OO is the optimal computing budget allocation (OCBA)
In this paper, we investigate two popular OCBA algorithms.
arXiv Detail & Related papers (2022-11-27T04:55:40Z)
- Socio-cognitive Optimization of Time-delay Control Problems using Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are general-purpose optimization algorithms intended for difficult problems that classical approaches cannot solve.
In this paper we construct a novel socio-cognitive metaheuristic based on castes, and apply several versions of this algorithm to the optimization of a time-delay system model.
arXiv Detail & Related papers (2022-10-23T22:21:10Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Dynamic Multi-objective Ensemble of Acquisition Functions in Batch Bayesian Optimization [1.1602089225841632]
The acquisition function plays a crucial role in the optimization process.
Three acquisition functions are dynamically selected from a set based on their current and historical performance.
Optimizing such a multi-objective problem (MOP) with an evolutionary multi-objective algorithm yields a set of non-dominated solutions.
arXiv Detail & Related papers (2022-06-22T14:09:18Z)
- Benchmarking Feature-based Algorithm Selection Systems for Black-box Numerical Optimization [0.0]
Feature-based algorithm selection aims to automatically find the best one from a portfolio of optimization algorithms on an unseen problem.
This paper analyzes algorithm selection systems on the 24 noiseless black-box optimization benchmarking functions.
We show that the performance of algorithm selection systems can be significantly improved by using sequential least squares programming as a pre-solver.
arXiv Detail & Related papers (2021-09-17T07:17:40Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
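The dyadic feature representation in the last entry can be sketched as follows. The feature vectors, algorithm names, and linear scorer here are illustrative assumptions, not details from the paper: the idea is only that concatenating problem features with algorithm features lets a single learned model score any (problem, algorithm) pair.

```python
import numpy as np

def dyadic_features(problem_feats, algo_feats):
    """Hypothetical dyadic representation: concatenate the problem's
    feature vector with an algorithm's feature vector."""
    return np.concatenate([problem_feats, algo_feats])

def select_algorithm(w, problem_feats, algo_feature_table):
    """Score every candidate with a linear model w over dyadic features
    (a stand-in for a learned ranker) and return the best-scoring one."""
    scores = {name: float(w @ dyadic_features(problem_feats, feats))
              for name, feats in algo_feature_table.items()}
    return max(scores, key=scores.get)

# Toy usage: two candidate algorithms described by 2-d feature vectors.
w = np.array([0.0, 0.0, 1.0, -1.0])   # illustrative "learned" weights
problem = np.array([1.0, 0.0])
algos = {"cma-es": np.array([1.0, 0.0]),
         "nelder-mead": np.array([0.0, 1.0])}
chosen = select_algorithm(w, problem, algos)
```

Because the model scores pairs rather than memorizing per-algorithm labels, it can in principle rank algorithms never seen during training, which is what makes the representation suitable for the extreme (thousands-of-candidates) setting.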
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.