Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search
- URL: http://arxiv.org/abs/2502.05720v1
- Date: Sat, 08 Feb 2025 23:25:52 GMT
- Title: Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search
- Authors: Ziyad Benomar, Lorenzo Croissant, Vianney Perchet, Spyros Angelopoulos
- Abstract summary: One-max search is a classic problem in online decision-making.
We show how to leverage the obtained smoothness to provide an analysis of one-max search in stochastic learning-augmented settings.
- Score: 20.536739487240318
- Abstract: One-max search is a classic problem in online decision-making, in which a trader acts on a sequence of revealed prices and accepts one of them irrevocably to maximise its profit. The problem has been studied both in probabilistic and in worst-case settings, notably through competitive analysis, and more recently in learning-augmented settings in which the trader has access to a prediction on the sequence. However, existing approaches either lack smoothness, or do not achieve optimal worst-case guarantees: they do not attain the best possible trade-off between the consistency and the robustness of the algorithm. We close this gap by presenting the first algorithm that simultaneously achieves both of these important objectives. Furthermore, we show how to leverage the obtained smoothness to provide an analysis of one-max search in stochastic learning-augmented settings which capture randomness in both the observed prices and the prediction.
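To make the setting concrete, below is a minimal Python sketch of threshold-based one-max search with a prediction of the maximum price. The `trust` parameter and the linear interpolation between the worst-case threshold sqrt(m*M) and the predicted maximum are illustrative assumptions for exposition; they are not the paper's Pareto-optimal and smooth construction.

```python
import math

def one_max_search(prices, m, M, prediction=None, trust=0.5):
    """Threshold-based one-max search on a stream of prices in [m, M].

    Without a prediction, the classical algorithm accepts the first price
    at or above sqrt(m * M), which is sqrt(M / m)-competitive.  With a
    prediction of the maximum price, the threshold is shifted toward the
    prediction according to trust in [0, 1]: trust = 0 recovers the
    worst-case threshold, trust = 1 follows the prediction fully.
    (Illustrative sketch only, not the paper's algorithm.)
    """
    worst_case_threshold = math.sqrt(m * M)
    if prediction is None:
        threshold = worst_case_threshold
    else:
        # Interpolate between the robust threshold and the predicted maximum,
        # clamped to the known price range [m, M].
        threshold = (1 - trust) * worst_case_threshold + trust * prediction
        threshold = min(max(threshold, m), M)

    last = m
    for price in prices:
        last = price
        if price >= threshold:
            return price  # accept irrevocably
    return last  # no price reached the threshold: forced to take the last one
```

For example, `one_max_search([1.2, 1.8, 3.5, 2.0], m=1, M=4, prediction=3.4, trust=0.8)` accepts 3.5, whereas with `trust=0` it also accepts 3.5 only because it clears the worst-case threshold of 2. Varying `trust` traces out a consistency-robustness trade-off of the kind the paper analyses.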
Related papers
- Effectively Leveraging Momentum Terms in Stochastic Line Search Frameworks for Fast Optimization of Finite-Sum Problems [0.5156484100374059]
We explore the relationship between recent line-search approaches for optimizing deep models in the overparametrized regime and momentum directions.
We introduce an algorithm that exploits a mix of data persistency and conjugate-gradient-type rules to define the momentum parameter.
The resulting algorithm is empirically shown to outperform other popular methods.
arXiv Detail & Related papers (2024-11-11T16:26:33Z) - Overcoming Brittleness in Pareto-Optimal Learning-Augmented Algorithms [6.131022957085439]
We propose a new framework in which the smoothness in the performance of the algorithm is enforced by means of a user-specified profile.
We apply this new approach to a well-studied online problem, namely the one-way trading problem.
arXiv Detail & Related papers (2024-08-07T23:15:21Z) - Beyond Expectations: Learning with Stochastic Dominance Made Practical [88.06211893690964]
Stochastic dominance models risk-averse preferences for decision making with uncertain outcomes.
Despite being theoretically appealing, the application of stochastic dominance in machine learning has been scarce.
We first generalize the stochastic dominance concept to enable feasible comparisons between arbitrary pairs of random variables.
We then develop a simple and efficient approach for finding the optimal solution in terms of dominance.
arXiv Detail & Related papers (2024-02-05T03:21:23Z) - Bi-objective Ranking and Selection Using Stochastic Kriging [0.0]
We consider bi-objective ranking and selection problems in which the two objective outcomes have been observed with uncertainty.
We propose a novel Bayesian bi-objective ranking and selection method that sequentially allocates extra samples to competitive solutions.
Experimental results show that the proposed method outperforms the standard allocation method, as well as a well-known state-of-the-art algorithm.
arXiv Detail & Related papers (2022-09-05T23:51:07Z) - Online Search With Best-Price and Query-Based Predictions [2.3204178451683264]
We study learning-augmented algorithms in which there is a potentially erroneous prediction concerning the input.
We provide experimental results on data obtained from stock exchange markets.
arXiv Detail & Related papers (2021-12-02T20:18:37Z) - Learning Equilibria in Matching Markets from Bandit Feedback [139.29934476625488]
We develop a framework and algorithms for learning stable market outcomes under uncertainty.
Our work takes a first step toward elucidating when and how stable matchings arise in large, data-driven marketplaces.
arXiv Detail & Related papers (2021-08-19T17:59:28Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with a dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z) - Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms [85.97516436641533]
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions.
The goal is to design algorithms that are both consistent and robust.
We provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions.
arXiv Detail & Related papers (2020-10-22T04:51:01Z) - Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.