Optimal Parameter-free Online Learning with Switching Cost
- URL: http://arxiv.org/abs/2205.06846v1
- Date: Fri, 13 May 2022 18:44:27 GMT
- Title: Optimal Parameter-free Online Learning with Switching Cost
- Authors: Zhiyu Zhang, Ashok Cutkosky, Ioannis Ch. Paschalidis
- Abstract summary: Parameter-freeness in online learning refers to the adaptivity of an algorithm with respect to the optimal decision in hindsight.
In this paper, we design such algorithms in the presence of switching cost - the latter penalizes the optimistic updates required by parameter-freeness.
We propose a simple yet powerful algorithm for Online Linear Optimization (OLO) with switching cost, which improves the existing suboptimal regret bound [ZCP22a] to the optimal rate.
- Score: 47.415099037249085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parameter-freeness in online learning refers to the adaptivity of an
algorithm with respect to the optimal decision in hindsight. In this paper, we
design such algorithms in the presence of switching cost - the latter penalizes
the optimistic updates required by parameter-freeness, leading to a delicate
design trade-off. Based on a novel dual space scaling strategy, we propose a
simple yet powerful algorithm for Online Linear Optimization (OLO) with
switching cost, which improves the existing suboptimal regret bound [ZCP22a] to
the optimal rate. The obtained benefit is extended to the expert setting, and
the practicality of our algorithm is demonstrated through a sequential
investment task.
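To make the regret-plus-movement objective concrete, the sketch below runs a standard parameter-free OLO baseline (Krichevsky-Trofimov coin betting in one dimension) on bounded linear losses and measures its regret against a fixed comparator plus the switching cost lambda * sum_t |x_t - x_{t-1}|. This is only an illustration of the quantity the paper controls; it is not the dual space scaling algorithm proposed there, and the loss sequence, comparator, and lambda are assumptions made for the example.

```python
import numpy as np

# Minimal sketch (NOT the paper's dual space scaling algorithm): a standard
# parameter-free 1-D OLO learner (Krichevsky-Trofimov coin betting) evaluated
# on linear losses, reporting regret against a fixed comparator plus a
# switching cost lam * sum_t |x_t - x_{t-1}|.  Loss sequence, comparator,
# and lam are illustrative assumptions.

def kt_betting_olo(grads):
    """Play x_t via KT coin betting; assumes |g_t| <= 1."""
    wealth, neg_grad_sum, plays = 1.0, 0.0, []
    for t, g in enumerate(grads, start=1):
        beta = neg_grad_sum / t          # KT betting fraction
        x = beta * wealth                # prediction x_t
        plays.append(x)
        wealth -= g * x                  # wealth update after observing g_t
        neg_grad_sum += -g
    return np.array(plays)

def regret_with_switching(grads, plays, comparator, lam=1.0):
    """Linear-loss regret vs. a fixed comparator plus total movement cost."""
    linear_regret = float(np.sum(grads * (plays - comparator)))
    switching = lam * float(np.sum(np.abs(np.diff(plays, prepend=0.0))))
    return linear_regret + switching

rng = np.random.default_rng(0)
g = 0.9 * rng.choice([-1.0, 1.0], size=1000)   # bounded linear losses
x = kt_betting_olo(g)
print(regret_with_switching(g, x, comparator=-1.0, lam=1.0))
```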
Related papers
- General framework for online-to-nonconvex conversion: Schedule-free SGD is also effective for nonconvex optimization [40.254487017289975]
This work investigates the effectiveness of schedule-free methods, developed by A. Defazio et al.
Specifically, we show that schedule-free SGD is effective for nonsmooth, nonconvex optimization problems.
arXiv Detail & Related papers (2024-11-11T15:25:48Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Efficient Methods for Non-stationary Online Learning [67.3300478545554]
We present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $\mathcal{O}(\log T)$ to $1$.
Our technique hinges on the reduction mechanism developed in parameter-free online learning and requires non-trivial twists on non-stationary online methods.
arXiv Detail & Related papers (2023-09-16T07:30:12Z) - Regret-Optimal Model-Free Reinforcement Learning for Discounted MDPs
with Short Burn-In Time [13.545356254920584]
We introduce a model-free algorithm that employs variance reduction and a novel technique that switches the execution policy in a slow-yet-adaptive manner.
This is the first regret-optimal model-free algorithm in the discounted setting, with the additional benefit of a low burn-in time.
arXiv Detail & Related papers (2023-05-24T20:22:43Z) - Online Joint Assortment-Inventory Optimization under MNL Choices [14.530542487845732]
We study an online joint assortment-inventory optimization problem, in which we assume that the choice behavior of each customer follows the Multinomial Logit (MNL) choice model.
We propose a novel algorithm that can effectively balance the exploration and exploitation in the online decision-making of assortment and inventory.
arXiv Detail & Related papers (2023-04-04T09:25:34Z) - Smoothed Online Combinatorial Optimization Using Imperfect Predictions [27.201074566335222]
We study smoothed online optimization problems when an imperfect predictive model is available.
We show that using predictions to plan for a finite time horizon leads to regret dependent on the total predictive uncertainty and an additional switching cost.
Our algorithm shows a significant improvement in cumulative regret compared to other baselines in synthetic online distributed streaming problems.
arXiv Detail & Related papers (2022-04-23T02:30:39Z) - Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z) - Online hyperparameter optimization by real-time recurrent learning [57.01871583756586]
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of wallclock time.
arXiv Detail & Related papers (2021-02-15T19:36:18Z) - Cost-Efficient Online Hyperparameter Optimization [94.60924644778558]
We propose an online HPO algorithm that reaches human expert-level performance within a single run of the experiment, while incurring only modest computational overhead compared to regular training.
arXiv Detail & Related papers (2021-01-17T04:55:30Z)