On Optimal Consistency-Robustness Trade-Off for Learning-Augmented
Multi-Option Ski Rental
- URL: http://arxiv.org/abs/2312.02547v1
- Date: Tue, 5 Dec 2023 07:33:51 GMT
- Title: On Optimal Consistency-Robustness Trade-Off for Learning-Augmented
Multi-Option Ski Rental
- Authors: Yongho Shin, Changyeol Lee, Hyung-Chan An
- Abstract summary: A learning-augmented multi-option ski rental problem generalizes the classical ski rental problem in two ways.
For randomized algorithms, we show the first nontrivial lower bound on the consistency-robustness trade-off.
Our algorithm matches our lower bound on robustness within a factor of e/2 when the consistency is at most 1.086.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The learning-augmented multi-option ski rental problem generalizes the
classical ski rental problem in two ways: the algorithm is provided with a
prediction on the number of days we can ski, and the ski rental options now
come with a variety of rental periods and prices to choose from, unlike the
classical two-option setting. Subsequent to the initial study of the
multi-option ski rental problem (without learning augmentation) due to Zhang,
Poon, and Xu, significant progress has recently been made on this problem. The
problem is very well understood when we relinquish one of the
two generalizations -- for the learning-augmented classical ski rental problem,
algorithms giving best-possible trade-off between consistency and robustness
exist; for the multi-option ski rental problem without learning augmentation,
deterministic/randomized algorithms giving the best-possible competitiveness
have been found. However, in the presence of both generalizations, there remained a
huge gap between the algorithmic and impossibility results. In fact, for
randomized algorithms, we did not have any nontrivial lower bounds on the
consistency-robustness trade-off before.
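As a toy illustration of the multi-option setting described above, the sketch below computes an offline cost baseline for a hypothetical set of rental options, each with a period and a price (the specific options are made up, and the sketch assumes options can simply be chained back to back, which simplifies the actual model):

```python
from functools import lru_cache

# Hypothetical option set: (rental period in days, price).
# Buying outright is modeled as an option with a huge period.
OPTIONS = [(1, 1), (7, 5), (10**9, 12)]

def offline_opt(days: int) -> int:
    """Cheapest way to cover `days` ski days with the options above,
    under the toy assumption that options chain back to back."""
    @lru_cache(maxsize=None)
    def solve(remaining: int) -> int:
        if remaining <= 0:
            return 0
        return min(price + solve(remaining - period)
                   for period, price in OPTIONS)
    return solve(days)

print(offline_opt(3))   # 3: renting daily is cheapest for a short season
print(offline_opt(20))  # 12: buying outright beats chaining weekly passes
```

An online algorithm does not know `days` in advance; the competitive ratio compares its cost against this offline optimum.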
This paper bridges this gap for both deterministic and randomized algorithms.
For deterministic algorithms, we present a best-possible algorithm that
completely matches the known lower bound. For randomized algorithms, we show
the first nontrivial lower bound on the consistency-robustness trade-off, and
also present an improved randomized algorithm. Our algorithm matches our lower
bound on robustness within a factor of e/2 when the consistency is at most
1.086.
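For intuition on the consistency-robustness trade-off, here is a minimal sketch of the classical two-option setting with a prediction, in the spirit of known deterministic learning-augmented ski-rental rules (the prices, prediction, and trade-off parameter are illustrative assumptions; the paper's multi-option algorithms are considerably more involved):

```python
import math

def ski_rental_cost(buy_day: int, buy_price: int, actual_days: int) -> int:
    """Total cost when renting at 1 per day until buy_day, then buying."""
    if actual_days < buy_day:
        return actual_days               # rented every day, never bought
    return (buy_day - 1) + buy_price     # rented buy_day - 1 days, then bought

def predicted_buy_day(prediction: int, buy_price: int, lam: float) -> int:
    """Deterministic rule with trade-off parameter lam in (0, 1]:
    trusting the prediction more (small lam) improves consistency
    at the expense of robustness."""
    if prediction >= buy_price:
        return math.ceil(lam * buy_price)   # predicted long season: buy early
    return math.ceil(buy_price / lam)       # predicted short season: buy late

# Example: buying costs 10, prediction says 15 ski days, lam = 0.5
b, y, lam = 10, 15, 0.5
day = predicted_buy_day(y, b, lam)          # buys on day ceil(0.5 * 10) = 5
cost = ski_rental_cost(day, b, actual_days=15)
opt = min(15, b)                            # offline optimum: min(days, price)
print(day, cost, cost / opt)                # ratio 1.4, within 1 + lam = 1.5
```

With a correct prediction the cost ratio stays near the consistency bound; if the prediction is wrong (e.g. the season ends on day 3), the rule still rents cheaply, which is what the robustness guarantee captures.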
Related papers
- Learning-Augmented Algorithms for the Bahncard Problem [12.852098444482426]
We study learning-augmented algorithms for the Bahncard problem.
PFSUM incorporates both history and short-term future to improve online decision making.
arXiv Detail & Related papers (2024-10-20T02:55:15Z)
- Semi-Bandit Learning for Monotone Stochastic Optimization [20.776114616154242]
We provide a generic online learning algorithm for a class of "monotone" problems.
Our framework applies to several fundamental problems in optimization such as the prophet inequality, Pandora's box, knapsack, matchings, and submodular optimization.
arXiv Detail & Related papers (2023-12-24T07:46:37Z)
- Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
arXiv Detail & Related papers (2023-06-03T17:05:13Z)
- Improved Learning-Augmented Algorithms for the Multi-Option Ski Rental Problem via Best-Possible Competitive Analysis [0.1529342790344802]
We present improved learning-augmented algorithms for the multi-option ski rental problem.
Our algorithm is based on a new, provably best-possible randomized competitive algorithm for the problem.
arXiv Detail & Related papers (2023-02-14T05:05:03Z)
- HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection [75.84584400866254]
We propose a new algorithm selector leveraging special forests, combining the strengths of both approaches while alleviating their weaknesses.
HARRIS' decisions are based on a forest model whose trees are built using splits optimized on a hybrid ranking and regression loss function.
arXiv Detail & Related papers (2022-10-31T14:06:11Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
- Learning to Accelerate Heuristic Searching for Large-Scale Maximum Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design, and has been widely applied into economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings because they either require intolerable running time or excessive computational resources.
We propose NeuSearcher, which leverages the knowledge learned from previously solved instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
- Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives [10.291482850329892]
Mixed integer programming (MIP) can be used to solve (to optimality) $\ell_0$-regularized regression problems.
We propose two classes of scalable algorithms: an exact algorithm that can handle $p \approx 50{,}000$ features in a few minutes, and approximate algorithms that can address instances with $p \approx 10^6$.
In addition, we present new estimation error bounds for $\ell_0$-regularized estimators.
arXiv Detail & Related papers (2020-01-17T18:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.