Formalizing Preferences Over Runtime Distributions
- URL: http://arxiv.org/abs/2205.13028v2
- Date: Fri, 2 Jun 2023 22:51:09 GMT
- Title: Formalizing Preferences Over Runtime Distributions
- Authors: Devon R. Graham, Kevin Leyton-Brown, Tim Roughgarden
- Abstract summary: We use a utility-theoretic approach to characterize the scoring functions that describe preferences over algorithms.
We show how to leverage a maximum-entropy approach for modeling underspecified captime distributions.
- Score: 25.899669128438322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When trying to solve a computational problem, we are often faced with a
choice between algorithms that are guaranteed to return the right answer but
differ in their runtime distributions (e.g., SAT solvers, sorting algorithms).
This paper aims to lay theoretical foundations for such choices by formalizing
preferences over runtime distributions. It might seem that we should simply
prefer the algorithm that minimizes expected runtime. However, such preferences
would be driven by exactly how slow our algorithm is on bad inputs, whereas in
practice we are typically willing to cut off occasional, sufficiently long runs
before they finish. We propose a principled alternative, taking a
utility-theoretic approach to characterize the scoring functions that describe
preferences over algorithms. These functions depend on the way our value for
solving our problem decreases with time and on the distribution from which
captimes are drawn. We describe examples of realistic utility functions and
show how to leverage a maximum-entropy approach for modeling underspecified
captime distributions. Finally, we show how to efficiently estimate an
algorithm's expected utility from runtime samples.
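To make the last point concrete, here is a minimal Monte Carlo sketch of estimating an algorithm's expected utility from runtime samples. It assumes an exponentially decaying utility for solved runs, zero utility for runs cut off at their captime, and exponentially distributed captimes (the maximum-entropy distribution on [0, ∞) given only a mean captime); all names and parameter values here are illustrative, not the paper's API.

```python
import math
import random

def expected_utility(runtimes, utility, captime_sampler, n_caps=1000, seed=0):
    """Monte Carlo estimate of an algorithm's expected utility.

    runtimes: observed runtime samples for the algorithm (seconds).
    utility: a nonincreasing function of time -- the value of solving at time t.
    captime_sampler: draws a captime from the (possibly underspecified)
        captime distribution.
    A run exceeding its captime is assumed to yield zero utility
    (an illustrative choice, not the paper's exact formulation).
    """
    rng = random.Random(seed)
    caps = [captime_sampler(rng) for _ in range(n_caps)]
    total = 0.0
    for t in runtimes:
        for kappa in caps:
            total += utility(t) if t <= kappa else 0.0
    return total / (len(runtimes) * len(caps))

# Illustrative setup: exponentially decaying value of a solution, and
# exponentially distributed captimes (max-entropy given only a mean captime).
decay = 0.1
u = lambda t: math.exp(-decay * t)
cap_sampler = lambda rng: rng.expovariate(1.0 / 300.0)  # mean captime 300s

solver_a = [12.0, 35.0, 8.0, 410.0, 22.0]  # hypothetical runtime samples
solver_b = [60.0, 65.0, 58.0, 61.0, 63.0]
print(expected_utility(solver_a, u, cap_sampler))
print(expected_utility(solver_b, u, cap_sampler))
```

Note how the long 410-second run of solver_a is often cut off rather than dominating the score, which is exactly the behavior the abstract argues for over plain expected runtime.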
Related papers
- Semi-Bandit Learning for Monotone Stochastic Optimization [20.776114616154242]
We provide a generic online learning algorithm for a class of "monotone" problems.
Our framework applies to several fundamental problems in optimization such as the prophet inequality, Pandora's box, knapsack, matchings, and submodular optimization.
arXiv Detail & Related papers (2023-12-24T07:46:37Z)
- Efficient distributed representations with linear-time attention scores normalization [3.8673630752805437]
We propose a linear-time approximation of the attention score normalization constants for embedding vectors with bounded norms.
The accuracy of our estimation formula surpasses competing kernel methods, in some cases by orders of magnitude.
The proposed algorithm is highly interpretable and easily adapted to an arbitrary embedding problem.
arXiv Detail & Related papers (2023-03-30T15:48:26Z)
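For context on the entry above: the "attention score normalization constants" are softmax denominators. A worked statement of the object being approximated, in notation of my own (the paper's exact estimator is not reproduced here):

```latex
% For embeddings x_1, ..., x_n, the softmax normalizer for item i is
\[
  Z_i \;=\; \sum_{j=1}^{n} \exp\!\big(\langle x_i, x_j \rangle\big),
  \qquad i = 1, \dots, n,
\]
% so computing all n constants exactly costs O(n^2) inner products.
% The claimed approximation \hat{Z}_i \approx Z_i runs in linear time,
% with error controlled when the norms \|x_j\| are bounded.
```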
- Trajectory-based Algorithm Selection with Warm-starting [2.3823600586675724]
We study the quality and accuracy of performance regression and algorithm selection models in the scenario of predicting different algorithm performances.
We show promising performance of the trajectory-based per-run algorithm selection with warm-starting.
arXiv Detail & Related papers (2022-04-13T14:00:55Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
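A minimal sketch of the bandit view in the entry above, assuming a UCB-style rule over candidate solvers in which every run is killed at a fixed cutoff and the capped runtime serves as a loss in [0, cutoff]. This generic structure illustrates censored, runtime-oriented losses; it is not the paper's exact algorithm.

```python
import math
import random

def censored_ucb(solvers, instances, cutoff, seed=0):
    """UCB-style online algorithm selection with runtimes censored at `cutoff`.

    solvers: list of callables; solver(instance, rng) -> true runtime.
    Each observed loss is min(runtime, cutoff): runs longer than the cutoff
    are killed and recorded as censored. Using the capped runtime as the loss
    is one simple choice, not the paper's exact loss adaptation.
    """
    rng = random.Random(seed)
    counts = [0] * len(solvers)
    mean_loss = [0.0] * len(solvers)
    for t, inst in enumerate(instances, start=1):
        if t <= len(solvers):
            k = t - 1  # play each solver once to initialize
        else:
            # Optimism for losses: pick the smallest lower confidence bound,
            # with the exploration bonus scaled by the loss range (the cutoff).
            k = min(
                range(len(solvers)),
                key=lambda i: mean_loss[i]
                - cutoff * math.sqrt(2.0 * math.log(t) / counts[i]),
            )
        runtime = solvers[k](inst, rng)
        loss = min(runtime, cutoff)  # censored observation
        counts[k] += 1
        mean_loss[k] += (loss - mean_loss[k]) / counts[k]
    return counts, mean_loss
```

Note that the per-round state is a pair of arrays of size |solvers|, so space and time per step are independent of the time horizon, matching the property highlighted in the summary.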
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
In experiments, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
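The mutual-information criterion in the InfoBAX entry above can be written as an expected information gain about the algorithm's output; a paraphrase in my notation, where $\mathcal{D}_t$ is the data so far, $O_A$ the output of algorithm $A$ run on $f$, and $y_x$ the observation at a candidate query $x$:

```latex
% Query where the observation is most informative about the output O_A:
\[
  x_{t+1} \;=\; \arg\max_{x}\; I\big(y_x ;\, O_A \,\big|\, \mathcal{D}_t\big)
  \;=\; \arg\max_{x}\;
  H\big[y_x \,\big|\, \mathcal{D}_t\big]
  \;-\; \mathbb{E}_{O_A \mid \mathcal{D}_t}\Big[ H\big[y_x \,\big|\, \mathcal{D}_t, O_A\big] \Big].
\]
```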
- Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
arXiv Detail & Related papers (2020-08-29T14:58:26Z)
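The "fundamental connection between gradients and expectations" in the spanning-tree entry above is, in essence, the exponential-family identity that derivatives of the log-partition function are marginals; a sketch for the edge-factored case, where the Matrix-Tree theorem makes $Z$ computable:

```latex
% Edge-factored distribution over spanning trees t of a graph:
%   p(t) \propto \exp( \sum_{e \in t} \theta_e ),  with Z(\theta) given in
%   closed form by the Matrix-Tree theorem.
\[
  \frac{\partial \log Z(\theta)}{\partial \theta_e} \;=\; p(e \in t) \;=\; \mu_e,
  \qquad
  \mathbb{E}_{p}\Big[\textstyle\sum_{e \in t} f_e\Big] \;=\; \sum_{e} f_e\, \mu_e .
\]
% Second-order expectations involve pairwise marginals \mu_{e,e'}, obtainable
% from second derivatives of \log Z(\theta).
```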
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as the basis of a decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
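A minimal sketch of the survival-analysis ingredient above, assuming a Kaplan-Meier estimate of the runtime survival function from right-censored runs (runs cut off before finishing); Run2Survive's decision-theoretic layer on top of such models is not reproduced here.

```python
def kaplan_meier(observations):
    """Kaplan-Meier estimate of the runtime survival function S(t).

    observations: list of (time, solved) pairs, where solved=False means the
    run was cut off at `time` (right-censored), so we only know runtime > time.
    Returns (time, S(t)) steps at the observed solve times.
    """
    # Standard tie convention: at equal times, solved runs precede censored ones.
    events = sorted(observations, key=lambda o: (o[0], not o[1]))
    n_at_risk = len(events)
    survival, steps = 1.0, []
    for time, solved in events:
        if solved:
            survival *= 1.0 - 1.0 / n_at_risk
            steps.append((time, survival))
        n_at_risk -= 1  # censored runs leave the risk set without an event
    return steps

# Hypothetical runs: (runtime in seconds, whether the run finished).
runs = [(3.2, True), (10.0, False), (7.5, True), (10.0, False), (1.1, True)]
for t, s in kaplan_meier(runs):
    print(f"S({t:.1f}) = {s:.3f}")
```

Censored runs still contribute information here (they shrink the risk set without triggering an event), which is exactly why SA suits runtime data with cutoffs.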
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, each achieving the optimal bound on excess loss while using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
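For reference, the optimal excess population loss for $(\varepsilon,\delta)$-differentially private stochastic convex optimization established by Bassily et al. is, up to constants and under standard assumptions (1-Lipschitz convex losses, bounded domain):

```latex
\[
  \mathbb{E}\big[L(\hat{w})\big] - \min_{w} L(w)
  \;=\; \Theta\!\left(\frac{1}{\sqrt{n}}
    + \frac{\sqrt{d\,\log(1/\delta)}}{\varepsilon n}\right).
\]
% The entry above attains this rate with O(min{n, n^2/d}) gradient computations.
```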
- Time-varying Gaussian Process Bandit Optimization with Non-constant Evaluation Time [93.6788993843846]
We propose a novel time-varying Bayesian optimization algorithm that can effectively handle non-constant evaluation times.
Our regret bound shows that the pattern of the evaluation-time sequence can strongly affect the difficulty of the problem.
arXiv Detail & Related papers (2020-03-10T13:28:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.