Unconstrained Dynamic Regret via Sparse Coding
- URL: http://arxiv.org/abs/2301.13349v5
- Date: Wed, 25 Oct 2023 18:30:39 GMT
- Title: Unconstrained Dynamic Regret via Sparse Coding
- Authors: Zhiyu Zhang, Ashok Cutkosky, Ioannis Ch. Paschalidis
- Abstract summary: Motivated by the challenge of nonstationarity in sequential decision making, we study Online Convex Optimization (OCO) under the coupling of two problem structures.
This paper achieves a new type of these adaptive regret bounds via a sparse coding framework.
- Score: 46.85145189210752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by the challenge of nonstationarity in sequential decision making,
we study Online Convex Optimization (OCO) under the coupling of two problem
structures: the domain is unbounded, and the comparator sequence
$u_1,\ldots,u_T$ is arbitrarily time-varying. As no algorithm can guarantee low
regret simultaneously against all comparator sequences, handling this setting
requires moving from minimax optimality to comparator adaptivity. That is,
sensible regret bounds should depend on certain complexity measures of the
comparator relative to one's prior knowledge.
This paper achieves a new type of these adaptive regret bounds via a sparse
coding framework. The complexity of the comparator is measured by its energy
and its sparsity on a user-specified dictionary, which offers considerable
versatility. Equipped with a wavelet dictionary for example, our framework
improves the state-of-the-art bound (Jacobsen & Cutkosky, 2022) by adapting to
both ($i$) the magnitude of the comparator average $||\bar
u||=||\sum_{t=1}^Tu_t/T||$, rather than the maximum $\max_t||u_t||$; and ($ii$)
the comparator variability $\sum_{t=1}^T||u_t-\bar u||$, rather than the
uncentered sum $\sum_{t=1}^T||u_t||$. Furthermore, our analysis is simpler due
to decoupling function approximation from regret minimization.
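To make the contrast concrete, here is a small numpy sketch (illustrative only; the function and variable names are mine, not the paper's) that computes the new complexity measures, $||\bar u||$ and $\sum_t||u_t-\bar u||$, alongside the prior ones, $\max_t||u_t||$ and $\sum_t||u_t||$:

```python
import numpy as np

def comparator_complexities(u):
    """Complexity measures of a comparator sequence u of shape (T, d).

    New measures (this paper):   ||u_bar||  and  sum_t ||u_t - u_bar||.
    Prior measures (J&C, 2022):  max_t ||u_t||  and  sum_t ||u_t||.
    """
    u = np.asarray(u, dtype=float)
    u_bar = u.mean(axis=0)                # comparator average
    norms = np.linalg.norm(u, axis=1)     # per-round norms ||u_t||
    return {
        "avg_magnitude": np.linalg.norm(u_bar),                  # ||u_bar||
        "variability": np.linalg.norm(u - u_bar, axis=1).sum(),  # centered sum
        "max_norm": norms.max(),                                 # max_t ||u_t||
        "uncentered_sum": norms.sum(),                           # sum_t ||u_t||
    }

# Example: a comparator that hovers far from the origin with small drift.
rng = np.random.default_rng(0)
T, d = 1000, 2
u = 10.0 + 0.01 * np.cumsum(rng.standard_normal((T, d)), axis=0)
print(comparator_complexities(u))
```

For such a comparator, the centered measures are dramatically smaller than the uncentered ones, which is exactly the regime where bounds depending on the average and the variability improve on bounds depending on the maximum and the uncentered sum.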
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance-reduced zeroth-order (ZO) estimators for complex gradient problems.
We improve the state-of-the-art function complexities from $\mathcal{O}\left(\min\left(\frac{dn^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right)\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
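For intuition about the zeroth-order setting this entry refers to, here is the textbook two-point ZO gradient estimator in numpy; it is a standard construction under my own naming, not the paper's variance-reduced estimators:

```python
import numpy as np

def two_point_zo_gradient(f, x, mu=1e-4, rng=None):
    """Classic two-point zeroth-order gradient estimator:
        g = d * (f(x + mu*v) - f(x - mu*v)) / (2*mu) * v,
    with v drawn uniformly from the unit sphere. It uses only two
    function evaluations and is an unbiased estimate of the gradient
    of a smoothed version of f.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    return d * (f(x + mu * v) - f(x - mu * v)) / (2.0 * mu) * v

# Sanity check on f(x) = 0.5 ||x||^2, whose gradient at x is x itself.
f = lambda x: 0.5 * float(x @ x)
x = np.array([1.0, -2.0, 3.0])
rng = np.random.default_rng(0)
estimate = np.mean(
    [two_point_zo_gradient(f, x, rng=rng) for _ in range(20000)], axis=0
)
print(estimate)  # close to [1, -2, 3]
```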
arXiv Detail & Related papers (2024-10-03T15:04:01Z)
- An Equivalence Between Static and Dynamic Regret Minimization [10.812831455376218]
We show that for linear losses, dynamic regret minimization is equivalent to static regret minimization in an extended decision space.
We provide an algorithm guaranteeing dynamic regret of the form $R_T(u_1,\dots,u_T) \le \tilde{\mathcal{O}}(\cdots)$.
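A brief sketch of the identity behind the stated equivalence, using standard notation rather than the paper's own:

```latex
% For linear losses \ell_t(x) = \langle g_t, x \rangle, dynamic regret
% against the sequence (u_1, ..., u_T) collapses to a single inner product:
\[
  R_T(u_1,\dots,u_T)
  = \sum_{t=1}^{T} \langle g_t,\, x_t - u_t \rangle
  = \langle \mathbf{g},\, \mathbf{x} - \mathbf{u} \rangle,
\]
% where \mathbf{g} = (g_1,\dots,g_T), \mathbf{x} = (x_1,\dots,x_T), and
% \mathbf{u} = (u_1,\dots,u_T) live in the product space \mathcal{X}^T.
% Dynamic regret in \mathcal{X} is thus static regret against the single
% comparator \mathbf{u} in the extended decision space \mathcal{X}^T.
```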
arXiv Detail & Related papers (2024-06-03T17:54:58Z)
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
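For context, a common formalization of the linear mixture SSP setting (standard definitions, assumed here rather than quoted from the paper):

```latex
% Linear mixture transition kernel: the unknown dynamics are a linear
% combination of d known basis kernels, indexed by theta^*:
\[
  \mathbb{P}(s' \mid s, a) = \big\langle \phi(s' \mid s, a),\, \theta^* \big\rangle,
  \qquad \theta^* \in \mathbb{R}^d.
\]
% SSP objective: minimize the expected cumulative cost incurred before
% the goal state g is first reached:
\[
  \min_{\pi} \ \mathbb{E}^{\pi}\!\left[ \sum_{t=1}^{\tau_g} c(s_t, a_t) \right],
  \qquad \tau_g = \min\{ t \ge 1 : s_t = g \}.
\]
```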
arXiv Detail & Related papers (2024-02-14T07:52:00Z)
- Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis [36.646876613637325]
We study finite-sum distributed optimization problems involving a master node and $n-1$ local nodes.
We propose two new algorithms, SVRS and AccSVRS, motivated by previous works.
arXiv Detail & Related papers (2023-04-15T08:18:47Z)
- Optimal Algorithms for Stochastic Complementary Composite Minimization [55.26935605535377]
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization.
We provide novel excess risk bounds, both in expectation and with high probability.
Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems.
arXiv Detail & Related papers (2022-11-03T12:40:24Z)
- Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [49.58290066287418]
We propose a novel method, the Multi-block-Single-probe Variance Reduced (MSVR) estimator, to alleviate the complexity of compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on strong convexity.
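As background, coupled compositional problems of this kind are usually written as a finite sum of compositions; the formulation below is a standard one and is assumed rather than quoted from the paper:

```latex
% Finite-sum coupled compositional optimization: each outer function f_i
% is composed with its own stochastic inner mapping g_i, all sharing x:
\[
  \min_{x \in \mathbb{R}^d} \ \frac{1}{m} \sum_{i=1}^{m} f_i\big( g_i(x) \big),
  \qquad g_i(x) = \mathbb{E}_{\xi_i}\big[ g_i(x; \xi_i) \big].
\]
% The coupling through the shared variable x makes naive stochastic
% estimates of the gradient of f_i(g_i(x)) biased, which is what
% variance-reduced estimators of the inner values g_i(x) mitigate.
```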
arXiv Detail & Related papers (2022-07-18T12:03:26Z)
- Submodular Maximization subject to a Knapsack Constraint: Combinatorial Algorithms with Near-optimal Adaptive Complexity [13.416309759182635]
We obtain the first constant-factor approximation for non-monotone submodular maximization subject to a knapsack constraint with near-optimal $O(\log n)$ adaptive complexity.
Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead.
This is also the first approach with sublinear adaptive complexity for the problem, and it yields guarantees comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives.
arXiv Detail & Related papers (2021-02-16T18:15:51Z)
- Escaping Saddle-Points Faster under Interpolation-like Conditions [19.9471360853892]
We show that under over-parametrization, several standard optimization algorithms escape saddle points and converge to local minimizers much faster.
We discuss the first-order oracle complexity of the Perturbed Stochastic Gradient Descent (PSGD) algorithm to reach an $\epsilon$-local-minimizer.
We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer is $\tilde{\mathcal{O}}(1/\epsilon^{2.5})$.
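To illustrate the perturbation idea behind PSGD-style methods, here is a minimal sketch of a generic perturbed gradient loop; the step sizes, thresholds, and names are hypothetical choices of mine, not the paper's algorithm:

```python
import numpy as np

def perturbed_gd(grad, x0, lr=0.05, noise_radius=0.1,
                 grad_threshold=1e-3, steps=5000, rng=None):
    """Generic perturbed gradient descent: take gradient steps, and when
    the gradient is small (a candidate saddle point), inject a random
    perturbation drawn uniformly from a small ball to escape it."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < grad_threshold:
            # Uniform sample from the ball of radius noise_radius.
            xi = rng.standard_normal(x.size)
            xi *= noise_radius * rng.random() ** (1.0 / x.size) / np.linalg.norm(xi)
            x = x + xi
        else:
            x = x - lr * g
    return x

# f(x, y) = x^4/4 - x^2/2 + y^2 has a saddle at the origin and minima at
# (+-1, 0). Plain gradient descent started at the origin never moves;
# the perturbation lets the iterate escape toward a minimizer.
grad = lambda z: np.array([z[0] ** 3 - z[0], 2.0 * z[1]])
print(perturbed_gd(grad, np.zeros(2), rng=np.random.default_rng(0)))
```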
arXiv Detail & Related papers (2020-09-28T02:15:18Z)
- Practical and Parallelizable Algorithms for Non-Monotone Submodular Maximization with Size Constraint [20.104148319012854]
We present combinatorial and parallelizable algorithms for maximizing a submodular function, not necessarily monotone, with respect to a size constraint.
We improve the best approximation factor achieved by an algorithm with optimal adaptivity and nearly optimal query complexity to $0.193 - \varepsilon$.
arXiv Detail & Related papers (2020-09-03T22:43:55Z)
- Dynamic Regret of Convex and Smooth Functions [93.71361250701075]
We investigate online convex optimization in non-stationary environments.
We choose the dynamic regret as the performance measure.
We show that it is possible to further enhance the dynamic regret by exploiting the smoothness condition.
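For reference, the performance measure in question, in standard notation:

```latex
% Dynamic regret against an arbitrary comparator sequence u_1, ..., u_T:
\[
  R_T(u_1,\dots,u_T) = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t),
\]
% typically bounded in terms of the path length of the comparators:
\[
  P_T = \sum_{t=2}^{T} \| u_t - u_{t-1} \|.
\]
```

Smoothness lets the $T$-dependence in such bounds be replaced by problem-dependent quantities such as the gradient variation, which is the enhancement the summary alludes to.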
arXiv Detail & Related papers (2020-07-07T14:10:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.