Overcoming Brittleness in Pareto-Optimal Learning-Augmented Algorithms
- URL: http://arxiv.org/abs/2408.04122v1
- Date: Wed, 7 Aug 2024 23:15:21 GMT
- Title: Overcoming Brittleness in Pareto-Optimal Learning-Augmented Algorithms
- Authors: Spyros Angelopoulos, Christoph Dürr, Alex Elenter, Yanni Lefki
- Abstract summary: We propose a new framework in which the smoothness in the performance of the algorithm is enforced by means of a user-specified profile.
We apply this new approach to a well-studied online problem, namely the one-way trading problem.
- Score: 6.131022957085439
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The study of online algorithms with machine-learned predictions has gained considerable prominence in recent years. One of the common objectives in the design and analysis of such algorithms is to attain (Pareto) optimal tradeoffs between the consistency of the algorithm, i.e., its performance assuming perfect predictions, and its robustness, i.e., the performance of the algorithm under adversarial predictions. In this work, we demonstrate that this optimization criterion can be extremely brittle, in that the performance of Pareto-optimal algorithms may degrade dramatically even in the presence of imperceptible prediction error. To remedy this drawback, we propose a new framework in which the smoothness in the performance of the algorithm is enforced by means of a user-specified profile. This allows us to regulate the performance of the algorithm as a function of the prediction error, while simultaneously maintaining the analytical notion of consistency/robustness tradeoffs, adapted to the profile setting. We apply this new approach to a well-studied online problem, namely the one-way trading problem. For this problem, we further address another limitation of the state-of-the-art Pareto-optimal algorithms, namely the fact that they are tailored to worst-case and extremely pessimistic inputs. We propose a new Pareto-optimal algorithm that leverages any deviation from the worst-case input to its benefit, and we introduce a new metric that allows us to compare any two Pareto-optimal algorithms via a dominance relation.
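To make the brittleness phenomenon concrete, consider a toy reservation-price strategy for one-way trading: a trader holds one unit of an asset, prices arrive online above a floor price, and the trader decides what fraction to sell at each price. The sketch below illustrates the failure mode and a profile-style smoothing; it is not the paper's algorithm, and the policies and all parameters are invented for this example.

```python
# Toy illustration of brittleness in prediction-based one-way trading.
# NOT the paper's algorithm: both policies and all parameters below are
# invented for this example.

def trade(prices, reservation, floor):
    """Fully trusting policy: sell the whole unit at the first price that
    reaches the reservation price; otherwise liquidate at the floor."""
    for p in prices:
        if p >= reservation:
            return p
    return floor

def smoothed_trade(prices, prediction, floor, k=4, spread=0.2):
    """Profile-style smoothing: split the unit into k chunks whose
    reservation prices are spread below the prediction, so revenue
    degrades gradually with prediction error instead of collapsing."""
    thresholds = [prediction * (1 - spread * i / (k - 1)) for i in range(k)]
    remaining, revenue = 1.0, 0.0
    for p in prices:
        for t in list(thresholds):
            if p >= t and remaining > 0:
                revenue += p / k          # sell one chunk at price p
                remaining -= 1.0 / k
                thresholds.remove(t)
    return revenue + remaining * floor    # liquidate leftovers at the floor

floor_price, prediction, eps = 1.0, 100.0, 0.01
prices = [prediction - eps]               # true maximum barely misses the prediction
print(max(prices) / trade(prices, prediction, floor_price))           # ~99.99: collapse
print(max(prices) / smoothed_trade(prices, prediction, floor_price))  # ~1.33: graceful
```

With a prediction that overshoots the true maximum by one cent, the trusting policy's competitive ratio explodes to roughly 100, while the chunked policy stays near 1.33; the paper's framework turns exactly this degradation curve into a user-specified profile.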
Related papers
- Decision-Theoretic Approaches in Learning-Augmented Algorithms [6.697702130929691]
We introduce approaches based on deterministic measures such as distance-based evaluation.
We balance the trade-off between the algorithm's performance and the risk associated with the imperfect oracle.
These approaches help us quantify the algorithmic performance across the entire spectrum of prediction error.
arXiv Detail & Related papers (2025-01-29T15:16:27Z)
- On Tradeoffs in Learning-Augmented Algorithms [17.387787159892287]
Learning-augmented algorithms must exhibit three key properties: consistency, robustness, and smoothness.
This paper demonstrates that certain problems involve multiple tradeoffs between consistency, robustness, smoothness, and average performance.
arXiv Detail & Related papers (2025-01-22T10:12:18Z)
- Approximation Algorithms for Combinatorial Optimization with Predictions [3.7235228254732524]
We use predictions to improve over approximation guarantees of classic algorithms.
Our algorithms produce optimal solutions when provided with perfect predictions.
Although we show our approach to be optimal for this class of problems as a whole, there is potential to exploit specific structural properties of individual problems and obtain improved bounds.
arXiv Detail & Related papers (2024-11-25T17:31:34Z)
- Deterministic Trajectory Optimization through Probabilistic Optimal Control [3.2771631221674333]
We discuss two algorithms tailored to discrete-time deterministic finite-horizon nonlinear optimal control problems.
Both algorithms can be derived from an emerging theoretical paradigm that we refer to as probabilistic optimal control.
The main advantage of the algorithms is an improved balance between exploration and exploitation over the iterations.
arXiv Detail & Related papers (2024-07-18T09:17:47Z)
- Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
arXiv Detail & Related papers (2022-02-18T17:25:43Z)
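As a concrete instance of the robustness-consistency trade-offs tuned in the entry above, here is a sketch in the style of the deterministic ski-rental algorithm of Purohit et al. (NeurIPS 2018), where a parameter lam in (0, 1] yields consistency about 1 + lam and robustness about 1 + 1/lam. Function and parameter names are ours, not from either paper.

```python
import math

def ski_rental_cost(true_days, buy_cost, predicted_days, lam):
    """Rent for 1 per day; buy on a day chosen from the prediction.
    lam near 0 trusts the prediction (better consistency); lam near 1
    hedges against it (better robustness)."""
    if predicted_days >= buy_cost:
        buy_day = math.ceil(lam * buy_cost)   # prediction says buy: buy early
    else:
        buy_day = math.ceil(buy_cost / lam)   # prediction says rent: delay buying
    if true_days < buy_day:
        return true_days                      # rented every day, never bought
    return (buy_day - 1) + buy_cost           # rented, then bought

b = 10
for lam in (0.2, 0.5, 1.0):
    opt = min(100, b)                         # offline optimum for 100 ski days
    good = ski_rental_cost(100, b, predicted_days=100, lam=lam)  # perfect prediction
    bad = ski_rental_cost(100, b, predicted_days=1, lam=lam)     # adversarial prediction
    print(f"lam={lam}: consistency ~ {good/opt:.2f}, robustness ~ {bad/opt:.2f}")
```

For lam = 0.2 the ratios are 1.10 and 5.90 (within the 1 + lam and 1 + 1/lam bounds); for lam = 1.0 both are 1.90, i.e., no benefit from the prediction but full worst-case protection.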
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
Using this framework, our analysis shows that these algorithms match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- On the Optimality of Batch Policy Optimization Algorithms [106.89498352537682]
Batch policy optimization considers leveraging existing data for policy construction before interacting with an environment.
We show that any confidence-adjusted index algorithm is minimax optimal, whether it be optimistic, pessimistic or neutral.
We introduce a new weighted-minimax criterion that considers the inherent difficulty of optimal value prediction.
arXiv Detail & Related papers (2021-04-06T05:23:20Z)
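A minimal reading of "confidence-adjusted index" from the entry above, on a batch multi-armed bandit: score each arm by its empirical mean plus alpha times a confidence width, with alpha positive (optimistic), zero (neutral), or negative (pessimistic). The sketch below is a guess at such an index under a Hoeffding-style width, not the paper's exact construction.

```python
import numpy as np

def choose_arm(logged_rewards, alpha, delta=0.05):
    """logged_rewards: one array of observed rewards per arm.
    alpha > 0: optimistic (UCB-like); alpha = 0: neutral (greedy);
    alpha < 0: pessimistic (LCB-like)."""
    scores = []
    for r in logged_rewards:
        width = np.sqrt(np.log(2 / delta) / (2 * len(r)))  # Hoeffding-style width
        scores.append(r.mean() + alpha * width)
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
# Arm 0 is well covered by the batch; arm 1 is barely observed.
data = [rng.normal(0.5, 1.0, size=200), rng.normal(0.6, 1.0, size=5)]
for alpha, name in [(1.0, "optimistic"), (0.0, "neutral"), (-1.0, "pessimistic")]:
    print(f"{name}: arm {choose_arm(data, alpha)}")
```

The pessimistic index discounts the under-observed arm; the entry's claim is that all three variants are minimax optimal in the batch setting.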
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves, for any $k$, an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
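For context, the classical Double Coverage baseline on the real line, which the entry above augments with machine-learned advice: a request between two servers moves both neighbors toward it at equal speed until one arrives, while a request outside the servers' span moves only the nearest server. Below is a minimal sketch of this textbook baseline, not the learning-augmented algorithm.

```python
def double_coverage(servers, requests):
    """k-server on the line: returns total movement cost."""
    servers = sorted(servers)
    cost = 0.0
    for r in requests:
        left = [s for s in servers if s <= r]
        right = [s for s in servers if s >= r]
        if not left:                           # request left of every server
            i = servers.index(min(right))
            cost += servers[i] - r
            servers[i] = r
        elif not right:                        # request right of every server
            i = servers.index(max(left))
            cost += r - servers[i]
            servers[i] = r
        else:                                  # move both neighbors toward r
            li, ri = servers.index(max(left)), servers.index(min(right))
            d = min(r - servers[li], servers[ri] - r)
            servers[li] += d                   # one of the two reaches r exactly
            servers[ri] -= d
            cost += d if li == ri else 2 * d
        servers.sort()
    return cost

print(double_coverage([0.0, 10.0], [5.0, 0.0, 10.0]))  # 10 + 5 + 5 = 20.0
```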
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms [85.97516436641533]
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions.
The goal is to design algorithms that are both consistent and robust.
We provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions.
arXiv Detail & Related papers (2020-10-22T04:51:01Z)