Parameter-free Online Linear Optimization with Side Information via
Universal Coin Betting
- URL: http://arxiv.org/abs/2202.02406v1
- Date: Fri, 4 Feb 2022 21:56:29 GMT
- Title: Parameter-free Online Linear Optimization with Side Information via
Universal Coin Betting
- Authors: J. Jon Ryu, Alankrita Bhatt, Young-Han Kim
- Abstract summary: A class of parameter-free online linear optimization algorithms is proposed.
They harness the structure of an adversarial sequence by adapting to some side information.
The proposed algorithm is further refined to achieve the best performance over all adaptive algorithms with tree-structured side information.
- Score: 21.584183030149084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A class of parameter-free online linear optimization algorithms is proposed
that harnesses the structure of an adversarial sequence by adapting to some
side information. These algorithms combine the reduction technique of Orabona
and Pál (2016), which converts coin betting algorithms into online linear
optimization algorithms, with universal compression techniques from information
theory for incorporating sequential side information into coin betting. Concrete examples
are studied in which the side information has a tree structure and consists of
quantized values of the previous symbols of the adversarial sequence, including
fixed-order and variable-order Markov cases. By modifying the context-tree
weighting technique of Willems, Shtarkov, and Tjalkens (1995), the proposed
algorithm is further refined to achieve the best performance over all adaptive
algorithms with tree-structured side information of a given maximum order in a
computationally efficient manner.
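To make the reduction concrete, the following is a minimal sketch of a one-dimensional parameter-free online linear optimization learner built from per-context Krichevsky-Trofimov (KT) coin bettors, in the spirit of the Orabona and Pál (2016) reduction and the paper's use of side information. The class name, the context encoding, and the toy gradient sequence are illustrative assumptions, not details taken from the paper.

```python
import numpy as np


class PerContextCoinBettingOLO:
    """Sketch: 1-D parameter-free online linear optimization via coin betting.

    One Krichevsky-Trofimov (KT) bettor is kept per side-information context.
    Each round: observe a context, play a prediction w_t, then receive a
    gradient g_t in [-1, 1]; the "coin outcome" fed to the bettor is c_t = -g_t.
    """

    def __init__(self, initial_wealth=1.0):
        self.initial_wealth = initial_wealth
        self.state = {}  # context -> {"wealth", "coin_sum", "n"}

    def _bettor(self, context):
        if context not in self.state:
            self.state[context] = {"wealth": self.initial_wealth,
                                   "coin_sum": 0.0, "n": 0}
        return self.state[context]

    def predict(self, context):
        # KT betting fraction: (sum of past coins) / (number of past rounds + 1).
        s = self._bettor(context)
        beta = s["coin_sum"] / (s["n"] + 1.0)
        return beta * s["wealth"]  # the bet doubles as the OLO prediction w_t

    def update(self, context, gradient):
        # Settle the bet placed by the bettor of the context active this round.
        s = self._bettor(context)
        coin = -float(np.clip(gradient, -1.0, 1.0))
        bet = (s["coin_sum"] / (s["n"] + 1.0)) * s["wealth"]
        s["wealth"] += coin * bet
        s["coin_sum"] += coin
        s["n"] += 1


if __name__ == "__main__":
    # Toy run: the context is the sign of the previous gradient, a crude stand-in
    # for the quantized, Markov-style side information discussed in the abstract.
    rng = np.random.default_rng(0)
    learner, prev_sign, total_loss = PerContextCoinBettingOLO(), 0, 0.0
    for t in range(1000):
        g = rng.uniform(-1.0, 1.0) * (0.2 if prev_sign >= 0 else 1.0)
        w = learner.predict(prev_sign)
        total_loss += g * w
        learner.update(prev_sign, g)
        prev_sign = int(np.sign(g))
    print("cumulative linear loss:", total_loss)
```

Giving each context its own wealth lets the learner place different bets depending on the recent history, which is the basic mechanism by which side information is exploited; the paper's refinement goes further, using a modified context-tree weighting mixture so that a single efficient algorithm competes with all tree-structured context sets up to a given maximum order.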
Related papers
- Individualized Privacy Accounting via Subsampling with Applications in Combinatorial Optimization [55.81991984375959]
In this work, we give a new technique for analyzing individualized privacy accounting via the following simple observation.
We obtain several improved algorithms for private combinatorial optimization problems, including decomposable submodular maximization and set cover.
arXiv Detail & Related papers (2024-05-28T19:02:30Z)
- A Generalized Approach to Online Convex Optimization [33.38582292895673]
We show that any algorithm for online linear optimization with fully adaptive adversaries is an algorithm for online convex optimization.
We show that any such algorithm that requires full-information feedback may be transformed into an algorithm with semi-bandit feedback with a comparable regret bound.
arXiv Detail & Related papers (2024-02-13T17:42:27Z)
- Online Combinatorial Linear Optimization via a Frank-Wolfe-based Metarounding Algorithm [0.589889361990138]
We propose a new metarounding algorithm under a natural assumption that a relaxation-based approximation algorithm exists for the class.
Our algorithm is much more efficient in both theoretical and practical aspects.
arXiv Detail & Related papers (2023-10-19T10:22:03Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Data-driven abstractions via adaptive refinements and a Kantorovich metric [extended version] [56.94699829208978]
We introduce an adaptive refinement procedure for the smart and scalable abstraction of dynamical systems.
In order to learn the optimal structure, we define a Kantorovich-inspired metric between Markov chains.
We show that our method yields a much better computational complexity than using classical linear programming techniques.
arXiv Detail & Related papers (2023-03-30T11:26:40Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Adaptive Stochastic Optimisation of Nonconvex Composite Objectives [2.1700203922407493]
We propose and analyse a family of generalised composite mirror descent algorithms.
With adaptive step sizes, the proposed algorithms converge without requiring prior knowledge of the problem.
We exploit the low-dimensional structure of the decision sets for high-dimensional problems.
arXiv Detail & Related papers (2022-11-21T18:31:43Z)
- Optimistic Optimisation of Composite Objective with Exponentiated Update [2.1700203922407493]
The algorithms can be interpreted as the combination of the exponentiated gradient and $p$-norm algorithm.
They achieve a sequence-dependent regret upper bound, matching the best-known bounds for sparse target decision variables.
arXiv Detail & Related papers (2022-08-08T11:29:55Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Adaptive First- and Second-Order Algorithms for Large-Scale Machine Learning [3.0204520109309843]
We consider first- and second-order techniques to address continuous optimization problems in machine learning.
In the first-order case, we propose a framework of transition from semi-deterministic to quadratic regularization methods.
In the second-order case, we propose a novel first-order algorithm with adaptive sampling and adaptive step size.
arXiv Detail & Related papers (2021-11-29T18:10:00Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence guarantee for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.